Tertiary lymphoid structures (TLS) identification and density assessment on H&E-stained digital slides of lung cancer

Understanding the host immune response to cancer is a critical area of investigation. This has resulted in the recent introduction of various immunotherapeutic drugs (targeting checkpoint inhibition) in the treatment of lung, renal and skin cancers. The host immune response is also partly mediated by Tertiary Lymphoid Structures (TLS). The latter are discrete entities of lymphoid cells which are recognised on histological H&E-stained sections, as they share some histological features with lymph nodes. TLS are generally absent from most organs under normal conditions and have been observed in pathogen infection, autoimmune disorders, allograft rejection, and in several types of cancer. However, in contrast to autoimmune disorders, high densities of TLS in cancers, including breast, colorectal and lung cancer, are usually associated with favourable patient prognosis and outcomes and improved immunotherapy response. The presence and importance of TLS in lung cancer were first reported by Dieu-Nosjean et al. in 2016, who used immunohistochemistry, gene expression assays, and flow cytometry on large series of lung tumours. They demonstrated that TLS are the sites for the generation of local and systemic T- and B-cell responses against tumours. Furthermore, in lung cancer, previous studies have identified three maturation stages of TLS culminating in germinal centre formation, with significant relevance to patient survival. The authors described TLS development along the stages of secondary lymphoid organ formation and showed that the second (primary follicle-like) and third (secondary follicle-like) maturation stages depend on co-expression of CD21, CD23 and CXCL13, whereas the first maturation stage (early stage, E-TLS) is characterized by dense lymphocytic aggregates without CD21 and CD23 expression. TLS density can be assessed in diagnostic H&E sections and can, thus, be easily introduced in routine pathology to serve as a relevant prognostic parameter. TLS are identifiable on H&E sections by histopathologists as discrete entities with a curved and smooth outline that contain tightly packed mature lymphoid cells. However, there is no current consensus on the definition of TLS, even though their presence has been evaluated morphologically on H&E slides in studies dating back to at least 1990. For example, it is uncertain whether there is a minimum number of mature lymphoid cells in a TLS. Although the lymphoid cells appear much more densely packed in a TLS compared to lymphocytes within normal or inflamed tissue, the minimum density of lymphocytes defining a TLS remains unspecified. In addition, the minimum size of a TLS is not agreed. The assessment of TLS density over a large histological area is also very time consuming and subject to interpretation variation. Previous studies have assessed TLS based on representative areas of tumour rather than on the whole tumour area. However, only a limited number of studies are in progress for developing automated methods for TLS detection and analysis. Silina et al. describe a quantitative pathology approach for the identification and quantification of different TLS maturation stages using seven-color immunofluorescent staining and segmentation algorithms of the Inform software.
As such, being able to evaluate TLS density across the overall area of the tumour would be more accurate. Furthermore, identification of TLS from routine H&E histological images would allow easier integration into clinical workflows. Various techniques and methods, based on either hand-crafted or deep learning features, have been developed for digital pathology image segmentation tasks, aiming to label regions of an image according to their content and to help pathologists make diagnostic and treatment processes more efficient. Hand-crafted methodologies, in which a set of local or global features is extracted, mostly include thresholding methods, region growing from seed points, exploitation of morphological features, watershed transformation, active contour models, Markov Random Fields and dynamic image segmentation methods. On the other hand, many segmentation methods have used deep-learning techniques, aiming to address the problem by extracting knowledge directly from the data. Numerous deep learning methods have been developed for medical image segmentation; these include autoencoders, deep convolutional neural networks (CNNs), cascaded networks and fully convolutional networks. However, the training of complex deep learning networks requires a large number of images and considerable computational power, as well as considerable effort and time for their annotation by experts. To this end, in this study, we first propose an automated approach for the identification and quantification of TLS in H&E histological images by applying a method that combines a DeepLab v3+ network, an active contour model and a lymphocyte segmentation approach. Secondly, we aim to translate the visual recognition of TLS by histopathologists into a universally reproducible set of mathematical values for the standardisation of TLS recognition: the area occupied by TLS, the minimum number of lymphocytes present and their density (number/unit area). A heat map of lymphocytes is then built, allowing us to define TLS in lung tissue (cancer and normal). Based on the above data, we propose formal mathematical criteria for the definition of TLS.
The framework of the proposed methodology for the detection of TLS regions and their lymphocytes is shown in the figure. Initially, an H&E image was fed into a modified DeepLab v3+ network for the detection of candidate TLS regions, and an active contour model was then applied in order to refine the boundaries of the TLS regions. Then, segmentation of lymphocytes was performed for the identification of the following features: the number of lymphocytes, the size of the TLS regions and the number of lymphocytes per unit area of TLS. The estimated features were used for post-validation of candidate TLS regions, aiming to filter out falsely detected TLS regions.

Identification of the candidate TLS regions

For the identification of the candidate TLS regions, a modified pre-trained DeepLab v3+ model with Inception-ResNet-v2 as the main feature extractor, which employs dropout to avoid overfitting, was utilized. The DeepLab models have been extensively used for semantic medical image segmentation and tested on large volumes of image datasets. These models provide the capability of learning multi-scale contextual features through Atrous Spatial Pyramid Pooling (ASPP) and use a decoder module for the refinement of the segmentation results, especially along object boundaries. In this work, the ASPP module employs multiple parallel atrous convolutional layers with different rates to learn multi-scale image information, aiming to identify TLS regions of different sizes and to retain the balance between context assimilation and fine localization. This network was selected due to the good balance it achieves between accuracy and computational complexity. Specifically, Inception-ResNet-v2 outperforms other common configurations with regard to accuracy and complexity. The model was pre-trained on ImageNet and then fine-tuned with training data prepared for this work. Pre-training of deep neural networks can be seen as a case of transfer learning, in which a neural network trained on a source dataset is subsequently fine-tuned on a target dataset. Pre-training on the source dataset enables the deep neural network to learn useful low-level features in its first layers, such that good results can be achieved with fewer examples in the fine-tuning stage, which mostly adapts the higher-level features in the last layers, hence requiring less labeled training data. It is worth mentioning that color normalization was applied to all the dataset images. Furthermore, an augmentation method was utilized to enlarge the image samples in order to better fine-tune the DeepLab v3+. Data augmentation artificially enlarges the size of the training dataset by applying spatial warps, which has proven a very effective strategy in many image analysis tasks. Even though the number of training cases might appear small, we combined the abundant pixel-level information with pre-training and data augmentation in order to increase the variability of training images and to avoid overfitting of the network. Additionally, a modified loss function was defined in order to adjust the model to better deal with the boundaries of TLS. Thus, by introducing a weighting factor w, we force the model to be sensitive to the TLS boundaries and the regions enclosed within the TLS. More specifically, the loss function is defined as

$\mathrm{Loss} = -\sum_{p=1}^{N} w_p\, r_p \log(t_p)$   (1)

where $w_p$, $r_p$ and $t_p$ denote the weighting factor, the reference value and the predicted value at pixel $p$, respectively, and $N$ is the total number of pixels. Regarding the weighting factors, we set $w_p = 2$ when $p$ is a TLS pixel and $w_p = 1$ otherwise.
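To make Eq. (1) concrete, below is a minimal NumPy sketch of a pixel-weighted cross-entropy loss of this form. This is not the authors' code; the array names and the toy inputs are illustrative assumptions.

```python
import numpy as np

def weighted_pixel_cross_entropy(pred, target, tls_mask, w_tls=2.0, eps=1e-7):
    """Pixel-weighted cross-entropy in the spirit of Eq. (1).

    pred:     (H, W, C) predicted class probabilities t_p (softmax output)
    target:   (H, W, C) one-hot reference labels r_p
    tls_mask: (H, W) boolean mask of TLS pixels; these receive weight w_tls,
              all other pixels receive weight 1 (matching w_p in the paper).
    """
    w = np.where(tls_mask, w_tls, 1.0)[..., None]      # per-pixel weights w_p
    return -np.sum(w * target * np.log(pred + eps))    # -sum_p w_p r_p log(t_p)

# toy example: a 2x2 image with 2 classes (background / TLS)
pred = np.array([[[0.9, 0.1], [0.2, 0.8]],
                 [[0.7, 0.3], [0.1, 0.9]]])
target = np.eye(2)[np.array([[0, 1], [0, 1]])]         # one-hot ground truth
tls_mask = np.array([[False, True], [False, True]])
print(weighted_pixel_cross_entropy(pred, target, tls_mask))
```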
TLS boundary refinement

Since the lymphoid cells appear at higher density within TLS regions, in order to obtain precise TLS contours we adopted an active contour approach that uses the local intensity distribution to drive the evolving curve. More specifically, the local intensities within the neighbourhood of each pixel are assumed to follow a Gaussian probability distribution:

$p_{i,x}\big(I(y)\,\big|\,m_i(x),\sigma_i(x)^2\big) = \dfrac{1}{\sqrt{2\pi}\,\sigma_i(x)} \exp\!\left(-\dfrac{(I(y)-m_i(x))^2}{2\sigma_i(x)^2}\right)$   (2)

where $m_i(x)$ and $\sigma_i(x)$ are the mean and standard deviation of the intensities in each local region. Thus, the local Gaussian distribution fitting energy is estimated as

$E = -\int_{\Omega}\int_{\mathrm{inside}(C)} K_\sigma(x-y)\,\log p_{1,x}\big(I(y)\,\big|\,m_1(x),\sigma_1(x)^2\big)\,dy\,dx \;-\; \int_{\Omega}\int_{\mathrm{outside}(C)} K_\sigma(x-y)\,\log p_{2,x}\big(I(y)\,\big|\,m_2(x),\sigma_2(x)^2\big)\,dy\,dx$   (3)

where $\Omega \subset \mathbb{R}^2$ is the image domain, $C$ is a closed contour, and the neighbourhood of each pixel is defined using a truncated Gaussian kernel $K_\sigma$. The applied model is therefore able to differentiate between regions with intensity heterogeneity and also between regions with similar intensity means but different intensity variances. The active contour model is initialized with the candidate TLS regions detected in the previous step.
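The local Gaussian distribution fitting model of Eqs. (2)–(3) is not available in standard libraries. As a rough, hedged stand-in, the sketch below refines a candidate TLS mask with scikit-image's morphological Chan-Vese active contour (a different region-based energy), initialised from the network output; function and variable names are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.filters import gaussian
from skimage.segmentation import morphological_chan_vese

def refine_tls_boundary(rgb_tile, candidate_mask, n_iter=50):
    """Refine a candidate TLS mask with an intensity-driven active contour.

    rgb_tile:       (H, W, 3) H&E image tile
    candidate_mask: (H, W) boolean mask produced by the segmentation network
    Returns a refined boolean mask whose boundary has evolved towards the
    dense, darkly stained lymphocyte region.
    """
    gray = gaussian(rgb2gray(rgb_tile), sigma=1.0)   # light smoothing to suppress noise
    # Morphological Chan-Vese: region-based curve evolution; the candidate mask
    # is used as the initial level set, and the second argument is the iteration count.
    refined = morphological_chan_vese(gray, n_iter,
                                      init_level_set=candidate_mask.astype(np.int8),
                                      smoothing=2)
    return refined.astype(bool)
```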
Lymphocyte segmentation

In digital histopathology, cell segmentation is the task of automatically splitting microscopic tissue images into segments that represent individual cells. Many cell segmentation methods have been developed in the field of medical image analysis, using both traditional techniques and deep learning. They achieve comparable accuracy rates and identify single cells through watershed transformation, active contours, or by modelling the cells with a set of circles or ellipses, while many other methods use deep neural networks. In this work, we propose an improved methodology based on an ellipsoidal model that iteratively identifies and counts the cells, aiming to keep a good balance between the estimated cell shapes and the overlapping parts of touching cells through a single validation criterion, and at the same time to overcome the limitation of previously developed methods, which in many cases erroneously reject small touching cells. To this end, input RGB images were first converted to grayscale and filtered with a 3×3 Gaussian kernel in order to remove small artifacts. A histogram equalization filter was then applied in order to enhance the differences between lymphocytes and other tissue components. Subsequently, an adaptive thresholding approach was applied, which sets the threshold based on the local mean intensity in the neighbourhood of each pixel. The formula used for thresholding is

$O(i,j) = \begin{cases} 255, & I(i,j) \ge T \\ 0, & I(i,j) < T \end{cases}$   (4)

where $O(i,j)$ is the pixel of the output image at $(i,j)$, $I(i,j)$ is the pixel of the input image and $T$ is the locally selected threshold value.

In the resulting binary image, morphological operations consisting of erosion, dilation and removal of small elements were applied in order to suppress small artifacts. For the separation of touching cells, an improved ellipsoidal modelling approach is proposed. Initially, we computed the distance transform of the binary image $M$, whose foreground pixels represent the connected cells, and estimated its regional maxima. Considering that the number and locations of the local maxima correspond to those of the nuclei, we rejected touching maxima; the remaining maxima comprise the list of candidate seeds. Based on the hypothesis that cells can be spatially modelled as ellipsoids $E_C$, the pixels of the cells were then modelled using a Gaussian distribution. More specifically, a Gaussian mixture model was applied with the number of clusters $C$ equal to the number of candidate seeds, and the mixture parameters, namely the means and variances, were estimated using the expectation-maximization (EM) algorithm. For the initialization of the EM algorithm, we used k-nearest-neighbour classification with the Euclidean distance as the distance metric in order to estimate the initial parameters. EM is an iterative method consisting of two steps: (i) expectation, which computes the expected log-likelihood with respect to the current estimates, and (ii) maximization, which maximizes this expected log-likelihood:

$Q(\theta\,|\,\theta^{(t)}) = \mathbb{E}_{Z|X,\theta^{(t)}}\big[\log L(\theta; X, Z)\big]$   (5)

$\theta^{(t+1)} = \arg\max_{\theta} Q(\theta\,|\,\theta^{(t)})$   (6)

where $Q$ is the expected value of the log-likelihood function of $\theta$, $X$ are the pixel coordinates, $Z$ are the latent variables and $\theta^{(t)}$ are the current parameters.

Having estimated the ellipsoidal models of the cells, we need to identify the optimal number of seeds by rejecting or approving the candidate seeds from the previously estimated list. Thus, we propose a single fitness validation criterion, estimated for all combinations of candidate seeds, aiming to accurately identify the total number of cells. The criterion takes into account the foreground, background and overlapping cell areas covered by the estimated ellipses, as well as the total area of the extracted ellipses. More specifically, the total area covered by the estimated ellipses is

$E = \sum_{p \in M} E_C(p)$   (7)

the foreground area of the binary image $M$ covered by the estimated ellipses is

$A_F = \sum_{p} M(p)\,E(p)$   (8)

the background area of the binary image $M$ covered by the estimated ellipses is

$A_B = \sum_{p} \big[1 - M(p)\big]\,E(p)$   (9)

and the overlapping parts of the ellipses of touching cells, over the total number of identified ellipses, is

$A_T = \sum_{i} \sum_{j \ne i} \sum_{p} E_{C_i}(p) \cap E_{C_j}(p)$   (10)

Based on these metrics, we estimated the fitness of the ellipsoidal components against the 2D cell data and selected the candidate seeds that maximize

$S = \max\left(\dfrac{A_F - A_B - A_T}{E}\right)$   (11)

The final segmentation of the clustered cells was performed by applying Bayesian classification, which assigns each pixel $p$ to the cluster $C_i$ with the maximum posterior probability.
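A hedged, simplified sketch of this cell-splitting step is shown below: it binarises a grayscale tile with a local-mean threshold, extracts distance-transform maxima as candidate seeds, fits a Gaussian mixture to the foreground pixel coordinates, and scores the resulting ellipses with the fitness criterion of Eq. (11). It relies on scikit-image/scikit-learn conveniences rather than the authors' kNN-initialised EM, omits the exhaustive search over seed subsets that maximises S, and all names and parameters are illustrative.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.filters import threshold_local
from skimage.morphology import binary_opening, remove_small_objects
from sklearn.mixture import GaussianMixture

def segment_cells(gray, block_size=35, min_seed_distance=4, chi2_95=5.99):
    # 1) local-mean threshold (cf. Eq. 4): pixels darker than their local mean become foreground
    local_mean = threshold_local(gray, block_size, method="mean")
    binary = remove_small_objects(binary_opening(gray < local_mean), min_size=20)

    # 2) candidate seeds: regional maxima of the distance transform
    dist = ndi.distance_transform_edt(binary)
    seeds = peak_local_max(dist, min_distance=min_seed_distance,
                           labels=ndi.label(binary)[0])
    if len(seeds) == 0:
        return binary, [], 0.0

    # 3) ellipsoidal modelling: one Gaussian component per candidate seed,
    #    fitted by EM to the coordinates of the foreground pixels
    coords = np.column_stack(np.nonzero(binary)).astype(float)
    gmm = GaussianMixture(n_components=len(seeds), means_init=seeds.astype(float),
                          covariance_type="full", random_state=0).fit(coords)

    # 4) rasterise each component's 95% ellipse and evaluate the fitness S (Eq. 11)
    yy, xx = np.mgrid[0:gray.shape[0], 0:gray.shape[1]]
    pts = np.stack([yy.ravel(), xx.ravel()], axis=1).astype(float)
    ellipses = []
    for mean, cov in zip(gmm.means_, gmm.covariances_):
        d = pts - mean
        inside = np.einsum("ij,jk,ik->i", d, np.linalg.inv(cov), d) <= chi2_95
        ellipses.append(inside.reshape(gray.shape))

    union = np.any(ellipses, axis=0)
    E = union.sum()
    A_F = (union & binary).sum()                    # foreground covered by ellipses
    A_B = (union & ~binary).sum()                   # background covered by ellipses
    A_T = sum((ellipses[i] & ellipses[j]).sum()     # pairwise overlap of touching ellipses
              for i in range(len(ellipses)) for j in range(i + 1, len(ellipses)))
    S = (A_F - A_B - A_T) / max(E, 1)               # fitness of this seed set
    return binary, ellipses, S
```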
Finally, as lymphocytes typically have small (7–10 μm), round and dark nuclei with little cytoplasm, which is distinctive from malignant or stromal cells, we used morphological and textural features, namely cell size and shape and the mean and skewness of the intensity histogram of the cell, in order to reject non-lymphocytes.

Post validation of candidate TLS regions

Following the candidate TLS refinement and lymphocyte detection, falsely detected candidate TLS regions were rejected, in order to validate the identified candidate TLS regions and decrease the false-positive TLS identification rate. More specifically, hypothesizing that the lymphocyte density of TLS regions is much higher than that of lymphocytes within the rest of the tissue, three features were extracted and used for the rejection of non-TLS candidate regions: the number of lymphocytes, the size, and the number of lymphocytes per unit area of each candidate TLS region. After the estimation of these features, an SVM classifier was deployed to arrive at a final decision as to whether an identified candidate TLS region is an actual TLS region or a false-positive case (a minimal sketch of this step is given after the dataset description below).

Dataset description

Formalin-fixed paraffin-embedded tissue, surplus to diagnostic purposes, was obtained from patients undergoing lung cancer resection. Informed consent was obtained from each donor prior to surgery for the use of surgically excised tissue for research purposes. This study was approved by the local Ethical Committee of the University of East Anglia (Ref No. 2017/2018–119 HT). Histological cases were retrieved from the archive of the Norfolk and Norwich University Hospital histopathology department. Tumours were classified according to the 2015 WHO classification. For each patient, TLS assessment was based on a representative tissue block, with adequate tumour material and the interface between normal and tumour tissue well represented. Annotation was performed on tumour tissue slides from 18 patients with primary lung cancer. Of the tumours, 14 were adenocarcinomas (5 acinar predominant, 1 papillary, 2 solid predominant and 6 lepidic predominant), 3 were squamous cell carcinomas and 2 were sarcomatoid carcinomas. The age range of the patients was 54–82 years, with an average of 69 ± 2.4 years. The tumour size was between 8 and 62 mm, with an average of 28 ± 6.4 mm. TLS were defined as all dense lymphocytic aggregates, and 144 TLS were annotated by 2 histopathologists (FK, MDC) with 100% concordance. To assess the generalizability of the model, two datasets (D1 and D2) were created, consisting of 5 and 13 patients respectively, for internal training-validation and further validation in independent populations.
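As referenced above, a minimal sketch of the post-validation classifier could look like the following. It is not the authors' code; the feature values, labels and hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def candidate_features(lymphocyte_count, region_area_um2):
    """Three features per candidate region: count, area, and density (count / area)."""
    return np.column_stack([lymphocyte_count, region_area_um2,
                            lymphocyte_count / region_area_um2])

# X: (n_candidates, 3) feature matrix; y: 1 = true TLS, 0 = false positive
X = candidate_features(np.array([620, 30, 900]), np.array([48000.0, 9000.0, 70000.0]))
y = np.array([1, 0, 1])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)

# classify a new candidate region
print(clf.predict(candidate_features(np.array([45]), np.array([6300.0]))))
```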
For the evaluation of the proposed method, we conducted extensive tests using the two datasets. Initially, using the first dataset, we internally validated the efficiency of the proposed methodology for TLS identification by performing an ablation analysis and leave-one-out cross-validation; then, using the second dataset, we externally validated the generalizability of the proposed model in a different population. Furthermore, we compared the efficiency of TLS region identification against state-of-the-art approaches. Finally, for the identified TLS regions in both datasets, we used box plots to show the range of the number and density of lymphocytes as well as the TLS area.

Identification of TLS regions

The presented TLS identification and density assessment methodology comprises three main components, namely the DeepLab model, the active contour approach and the post-validation processing. The DeepLab v3+ model was used to identify the candidate TLS regions in a semantic image segmentation task, the active contour approach was used for boundary refinement of the candidate TLS regions, and the post-validation scheme was used to reject candidate non-TLS regions. Initially, for the evaluation of the components of the proposed methodology, we performed a tissue slide-level leave-one-out analysis on the first dataset. We found that the DeepLab v3+ model achieves an area under the receiver operating characteristic curve (AUROC) of 0.9584. More specifically, at predefined sensitivity levels of 95%, 98% and 99%, the DeepLab v3+ model achieves specificity rates of 85.79%, 80.95% and 74.98% respectively. In addition to the DeepLab v3+ model, the use of boundary refinement improves the overall performance, reaching an AUROC of 0.96. This slight increase in AUROC translates into higher specificity rates at the predefined sensitivity levels, reaching 86.97% specificity at 95% sensitivity, 80.97% specificity at 98% sensitivity and 74.99% specificity at 99% sensitivity. The adoption of the post-validation scheme and the rejection of candidate non-TLS regions improves the performance further, reaching an AUROC of 0.9609. The corresponding rates for the full proposed model are 87.02% specificity at 95% sensitivity, 80.97% specificity at 98% sensitivity and 74.99% specificity at 99% sensitivity. Furthermore, in order to confirm that the performance of the proposed methodology remains robust, we carried out an external validation analysis using the second dataset. To this end, we used the first dataset as the training dataset, and the proposed methodology reached an AUROC of 0.9589, achieving 92.87% specificity at 95% sensitivity, 88.79% specificity at 98% sensitivity and 84.32% specificity at 99% sensitivity. The qualitative results demonstrate that all of the components used in this methodology contribute to the overall accuracy of TLS identification. Based on the input 10x H&E-stained images, the DeepLab v3+ accurately detects the candidate TLS regions, the active contour approach then obtains precise TLS boundaries, and the post-validation processing step rejects candidate non-TLS regions that have partially similar characteristics to TLS regions.
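For readers wanting to reproduce this style of reporting, below is a small, illustrative sketch (not the authors' evaluation code) of how AUROC and specificity at fixed sensitivity levels can be read off a ROC curve, assuming per-region scores and ground-truth labels are available.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def specificity_at_sensitivity(y_true, y_score, sensitivity_levels=(0.95, 0.98, 0.99)):
    """Return {sensitivity: specificity} pairs read off the ROC curve."""
    fpr, tpr, _ = roc_curve(y_true, y_score)
    out = {}
    for s in sensitivity_levels:
        idx = np.argmax(tpr >= s)      # first operating point reaching sensitivity s
        out[s] = 1.0 - fpr[idx]        # specificity = 1 - false positive rate
    return out

# toy example with synthetic scores
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 200)
y_score = y_true * 0.5 + rng.random(200) * 0.7
print("AUROC:", roc_auc_score(y_true, y_score))
print(specificity_at_sensitivity(y_true, y_score))
```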
A comparison with state-of-the-art methods

For the validation of the proposed methodology, three state-of-the-art methods were deployed: a SegNet model, a U-Net model and a classic lymphocyte-based thresholding method. The proposed method outperforms all three, identifying the TLS regions more accurately. More specifically, SegNet is a CNN architecture for semantic segmentation proposed by researchers at the University of Cambridge; its main applications include segmentation tasks such as semantic segmentation of prostate cancer, gland segmentation from colon cancer histology images and brain tumour segmentation from multi-modal magnetic resonance images. For the evaluation of SegNet in the task of TLS identification, we used the same protocol as for the proposed methodology and found that its performance on the external validation set reaches an AUROC of 0.9307, translating to 80.87% specificity at 95% sensitivity, 72.14% specificity at 98% sensitivity and 70.12% specificity at 99% sensitivity. We then chose U-Net, a popular deep-learning-based method for microscopy image segmentation problems. Deploying U-Net, we found that its performance reaches an AUROC of 0.7489 and 15.2% specificity at the 95% sensitivity level. It is worth mentioning that, in contrast to the proposed network, the U-Net was not pre-trained, which might explain its insufficient capture of the experimental variation. Finally, we applied a thresholding method aiming to distinguish regions with contrasting lymphocyte density levels. The thresholding method reaches an AUROC of 0.9021 and 57.75% specificity at 95% sensitivity, 38.71% specificity at 98% sensitivity and 22.41% specificity at 99% sensitivity.

Criteria for the definition of TLS regions

In this work, towards the standardisation of TLS recognition, we aim to translate the visual recognition of TLS by histopathologists into a universally reproducible set of mathematical values. Through the lymphocyte segmentation procedure, we show that TLS regions include a minimum of 45 lymphocytes and that the minimum area of TLS regions is 6,245 μm². The mean number of lymphocytes in TLS regions is 620.8, while the mean area of TLS regions is 48,387 μm². The largest number of lymphocytes in a TLS region is 2,936 and the largest TLS area is 230,604 μm². The minimum lymphocyte density of TLS regions is 0.0074/μm², with a mean and standard deviation of 0.0128/μm² and 0.0026/μm² respectively and a maximum of 0.0189/μm². It is worth mentioning that the aforementioned criteria were defined through the analysis of TLS regions in both of the datasets created. In additional analyses, we compared the lymphocyte density values of identified TLS regions with those outside of the TLS regions, and found that outside the TLS regions the minimum lymphocyte density is 0.0019/μm², with a mean and standard deviation of 0.0040/μm² and 0.0010/μm² respectively and a maximum of 0.0063/μm². Finally, it is worth mentioning that, for the evaluation of the proposed lymphocyte segmentation method, we manually annotated four hundred cells. The method was compared with an ellipsoidal model and the classical watershed algorithm, outperforming both with a correctly segmented rate of 91.5% in contrast to 89.5% and 86% respectively. Heat maps of lymphocytes were then constructed from the input 10x H&E-stained images and their corresponding detected TLS regions.
In the heat maps, dark blue represents the background and other cell types, while colours from lighter blue through to red represent increasing lymphocyte density. TLS regions are thus clearly and easily recognized within the lung cancer tissue on the lymphocyte heat map.
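A hedged sketch of how such a density heat map can be built from detected lymphocyte centroids follows (Gaussian smoothing of a point map; the colormap, bandwidth and toy centroids are illustrative choices, not the authors' settings).

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.ndimage import gaussian_filter

def lymphocyte_density_map(centroids, image_shape, sigma_px=25):
    """centroids: (N, 2) integer array of (row, col) lymphocyte centres."""
    points = np.zeros(image_shape, dtype=float)
    points[centroids[:, 0], centroids[:, 1]] = 1.0
    return gaussian_filter(points, sigma=sigma_px)   # smoothed local density

# toy example: one dense cluster (TLS-like) plus scattered cells
rng = np.random.default_rng(1)
cluster = rng.normal(loc=(200, 300), scale=20, size=(400, 2))
scattered = rng.uniform(0, 512, size=(150, 2))
centroids = np.clip(np.vstack([cluster, scattered]), 0, 511).astype(int)

density = lymphocyte_density_map(centroids, (512, 512))
plt.imshow(density, cmap="jet")                      # dark blue = low, red = high density
plt.colorbar(label="relative lymphocyte density")
plt.savefig("lymphocyte_heatmap.png", dpi=150)
```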
Histologically, TLS are recognised as mature and tightly packed lymphoid cells forming discrete entities with a smooth, circular outline. Previous studies have determined the density of TLS based on the expression of high endothelial venules, DC-LAMP-expressing cells and other markers. However, based on our diagnostic experience, this is likely to underestimate the density of TLS: the TLS detected by immunohistochemistry represent only a subset of all the TLS that can be seen histologically on an H&E slide. In our observations, Omer and Ng kee kwong have shown that there are TLS which do not contain CD21- and CD23-expressing cells, which are used as markers of TLS. Therefore, TLS demonstrated with a specific immunostain represent an underestimate of the total number of TLS. In the study by Silina et al., the total TLS number could only be identified by histological interpretation of the H&E slides. Although histopathologists have recognized the presence of TLS as Crohn's-like lymphoid aggregates adjacent to colorectal adenocarcinomas since at least 1990, current methods of detecting and quantifying TLS vary in the literature. Other factors, such as TLS diameter, have been shown to affect the prognosis of colorectal cancer. Therefore, the counting of TLS over a large histological area should be facilitated so that such factors can be integrated into the prognostic assessment of TLS. We first developed an automated machine learning approach to mimic the identification of TLS by histopathologists. This achieved 92.87% specificity at 95% sensitivity, 88.79% specificity at 98% sensitivity and 84.32% specificity at 99% sensitivity, implying that the automated approach was able to reproduce the histopathologists' assessment with high accuracy. When compared with existing vision recognition methods, our methodology was superior. After the lymphocytes had been segmented, we then used this approach to determine mathematical criteria defining a TLS. First, it appears that there should be a minimum number of lymphocytes within a TLS; in this limited series, we have shown that this minimum number is 45. It is also apparent that a TLS has a minimum area, which we have shown to be 6,245 μm². Visually, we recognise that lymphocytes are more tightly packed in TLS compared to those in the same tissue outside of the TLS; we have shown here that the density of lymphocytes within TLS is approximately 3 times that outside of TLS. The mean density of lymphocytes within a TLS is 0.0128/μm², which is much higher than the 0.0040/μm² observed in non-TLS regions. We therefore argue that future studies should use the following criteria for the definition of TLS: the number of lymphocytes within the TLS, the minimum area of the TLS, and the density of lymphocytes within a given area. As the density of lymphocytes within TLS is much higher than that outside, the algorithm has demonstrated a distinction between TLS and tumour-infiltrating lymphocytes. The latter are especially relevant as a prognostic factor in breast cancer, and therefore this current trained deep learning model cannot be used to detect regions of tumour-infiltrating lymphocytes. A limitation of this study is that we were not able to correlate TLS density with patient outcome, owing to the low number of patients in each dataset.
Therefore, future studies should correlate the above three criteria with the outcome of lung cancer patients in large datasets. Based on such findings, it would be possible to adjust the particular values used to define TLS, to give an optimum definition based on clinical outcome data. Standardisation of TLS density assessment by machine learning will allow the comparison of the host immune response between different subtypes of the same tumour, and between the same tumour across different studies. It will also allow better correlation between TLS density and outcome. Another limitation is that this study is based on lung cancer histological images. Future studies are needed to validate this methodology in other cancer types and in material from other centres. However, in our experience, TLS are histologically identical between different cancer types, and we expect this algorithm to be validated in other cancer subtypes too. In our histological experience, lung biopsies are quite narrow and therefore show only part of a TLS, if present. The smooth outline of the TLS would therefore not be seen in a biopsy, and the algorithm is unlikely to be effective; the model would need to be adapted to detect TLS in biopsies, which can be the subject of another study. This algorithm provides for the detection of all TLS but does not distinguish between the different types of TLS previously described by Silina et al. However, we have observed that TLS with germinal centres (secondary follicle-like TLS) constitute 53% of all TLS in a series of lung cancer cases (Omer and Ng kee kwong). We believe that the number of these TLS subtypes would depend on the total number of TLS. In this study, we used morphology to detect TLS; a future step would be to use machine learning to quantify TLS of different subtypes. Our data show that the performance of the proposed methodology for TLS identification remains high even when the methodology is trained with a limited amount of data. The estimated accuracy rates suggest that this AI method could be used in larger-scale studies and possibly in future clinical diagnostic studies. Although the tumour inflammatory response is assessed on H&E slides using tumour-infiltrating lymphocytes, the assessment of TLS is not used in diagnostic practice as there is no precise or standard definition. We therefore believe that our methodology can be used in future studies, including those of non-pulmonary tumour sites. Furthermore, our study provides the foundations for the standardisation of the definition and quantification of TLS on standard H&E histological images. In addition, we have shown that machine learning can provide a fast and reliable method of quantification, which could lead to its widespread adoption in routine histopathological practice.
Association and shared biological bases between birth weight and cortical structure

According to Barker's hypothesis, it is established that intrauterine growth plays a crucial role in shaping various neuropsychiatric traits, such as intelligence and cognitive outcomes, which extend beyond childhood into young adulthood and midlife. This insight has spurred studies exploring the structural variations in the brain associated with birth weight as a proxy for intrauterine growth. For instance, two cohort studies have identified associations between birth weight and two cortical structural phenotypes, cortical volume and surface area. However, despite these findings, the current evidence on the association between birth weight and cortical structure remains incomplete, leaving several gaps that warrant further investigation. One essential gap is the lack of robust evidence regarding the relationship between birth weight and more complex aspects of cortical morphometry, such as folding, curvature and microstructural phenotypes, which are more reflective of myelination and cytoarchitecture. For example, current evidence for the association between birth weight and fractional anisotropy is controversial: Rimol et al. reported a significant association between very low birth weight and fractional anisotropy among 120 individuals, but this association has not been consistently replicated across other studies. The inconsistency in reported findings likely stems from small sample sizes, as well as confounding biases and reverse causality inherent in observational studies. Hence, further well-designed studies covering more complex aspects of cortical morphometry are essential to advancing a comprehensive understanding of the association between birth weight and cortical structure. Another significant gap in the current literature concerns the mechanisms linking birth weight to cortical structure. One prevalent explanation suggests that adverse intrauterine conditions could elicit adaptive fetal responses that may have a long-term detrimental impact on brain development. However, this explanation lacks a detailed understanding of the cellular and molecular mechanisms involved. To date, the biological pathways linking birth weight to cortical development remain largely unexplored. Further studies are needed to elucidate the exact biological underpinnings of these associations, thus facilitating the development of targeted therapies or preventive strategies to support optimal development of the cerebral cortex. Recent progress in genome-wide association studies (GWAS) of cortical structure, the development of genome-wide cross-trait analysis tools, and the publication of multi-omics datasets have provided a new opportunity to address the abovementioned gaps. Researchers can examine the potential relationships between birth weight and cortical structure through two-sample Mendelian randomization (MR) analyses. Furthermore, combining multi-omics datasets with genome-wide cross-trait analysis tools enables the exploration of the shared biological relationships between birth weight and cortical structure, providing valuable insights into their potential mechanisms. In this study, we aimed to address the abovementioned gaps by conducting a comprehensive analysis using a multi-omics framework to investigate both the association and the biological relationship underlying birth weight and cortical structure.
We first examined whether birth weight is associated with 13 global cortical structure measures. Then, for significant associations, we searched for shared causal genes across the transcriptome and proteome to implicate the underlying mechanisms. Finally, we identified cell types enriched for birth weight-associated signals, to echo the previous analyses and explore the underlying pathways from a cellular perspective.
Study overview

As depicted in the figure, this study follows a three-phase approach. In phase 1, we conducted two-sample MR analyses to investigate the association of birth weight with cortical structure. In phase 2, we first performed a transcriptome-wide association study (TWAS) to identify gene expressions related to birth weight. Then, we carried out summary-based MR (SMR) analyses incorporating expression quantitative trait loci (eQTL) datasets to determine whether these gene expressions could impact the cortical structure phenotypes highlighted in the MR analyses. Additionally, we performed proteome-wide association studies (PWAS) and protein quantitative trait loci (pQTL)-based SMR analyses to provide evidence at the proteomic level. In phase 3, we applied the cell-type expression-specific integration for complex traits (CELLECT) method to explore the cell-type enrichment of birth weight. All data used in this study were deidentified, publicly available data; therefore, no ethical approval was required for this study.

Data sources

Birth weight

We obtained the largest GWAS summary statistics of birth weight to date, from the UK Biobank (UKB) and 35 studies participating in the Early Growth Genetics (EGG) Consortium (Supplementary Table). In the original GWAS, up to 321 223 participants of predominantly European ancestry (~93%) were recruited. Birth weight was collected through actual measurements, medical records and self-reporting. All birth weight measures were transformed to z-scores before analysis. A fixed-effects meta-analysis was conducted to combine the summary statistics from the UKB and the EGG Consortium. After approximate conditional and joint multiple-SNP analyses, 146 independent SNPs reaching a significance threshold of 6.6 × 10⁻⁹ were reported. We selected 143 independent significant SNPs located on autosomes as instrumental variables (IVs) for birth weight in the two-sample MR analysis (Supplementary Table). We also downloaded summary statistics comprising 298 142 individuals of European ancestry for the other downstream analyses.

Global cortical structure

We obtained the latest and most comprehensive GWAS summary statistics of cortical structure to date, involving 36 663 individuals from the UKB and the Adolescent Brain Cognitive Development (ABCD) study (Supplementary Table). Thirteen cortical phenotypes were analyzed globally and regionally in the original GWAS. High-resolution anatomical magnetic resonance imaging (MRI) data were processed to extract 8 cortical macrostructural phenotypes, including surface area (SA), volume, thickness, folding index (FI), intrinsic curvature index (ICI), local gyrification index (LGI), mean curvature (MC) and gaussian curvature (GC). Diffusion MRI data were processed to extract 5 cortical microstructural phenotypes, including fractional anisotropy (FA), mean diffusivity (MD), isotropic volume fraction (ISOVF), intracellular volume fraction (ICVF) and orientation diffusion index (ODI). Before analysis, all cortical phenotypes were standardized, and the global measures were calculated as the average across the 180 bilaterally averaged cortical regions. We obtained the full summary statistics of the global measures of the thirteen macro- and microstructural phenotypes for analysis.

Statistical analysis

MR

We performed two-sample MR analyses to investigate the association between birth weight and cortical structure.
Initially, we computed the R 2 to determine the variance in birth weight explained by the IVs and used F-statistics to assess the strength of the IVs. IVs with F-statistics below 10 were considered weak instruments and excluded from analysis. For the primary analysis, we employed the inverse-variance weighted (IVW) approach , operating under the assumption that all genetic variants are valid IVs. We complement the IVW MR with the weighted median, MR-Egger, MR-PRESSO and IVW radial methods. The weighted median method can robustly estimate the causal relationship even when less than 50% of the genetic variants are invalid IVs . MR-Egger method includes an intercept term to account for directional pleiotropy . The MR-PRESSO method could detect outliers and refine causal estimates by removing these outliers . The IVW radial method allows more straightforward detection of outliers and influential data points . We also performed leave-one-out analyses to identify if any SNP disproportionately drove the associations. To investigate potential bias due to sample overlap between birth weight and cortical structure, we further conducted the MRlap method . Furthermore, considering educational attainment and adult body mass index as potential confounders , we implemented Multivariable MR (MVMR) analyses to evaluate the independent effect of birth weight on cortical structure. We used Bonferroni correction to account for multiple tests in the primary analyses. An association was considered significant if the P -value in the primary analysis was below 3.846 × 10 −3 (0.05/13) and the direction of effect estimates remained consistent across all methods. We reported the MR analyses following the Strengthening the Reporting of Observational Studies in Epidemiology using MR (STROBE-MR) (Supplementary Table ). TWAS We performed a primary TWAS following the FUSION pipeline to identify gene expressions associated with birth weight. We integrated birth weight summary statistics with a cross-tissue expression weight calculated through sparse canonical correlation analysis (sCCA) . A total of 37 917 sCCA features extracted from the GTEx v8 release were involved in the cross-tissue expression reference weights. We defined significant results at P < 1.319 × 10 −6 (0.05/37 917) using Bonferroni correction based on the number of features tested across the reference weights. As a sensitivity analysis, we performed S-MultiXcan analysis for birth weight based on expression data from 49 tissues in GTEx v8 release to examine the robustness of the sCCA-TWAS results . PWAS We performed PWAS following the FUSION pipeline to identify proteins associated with birth weight. We used a reference weight developed based on protein abundance measures of 376 dorsolateral prefrontal cortex (dPFC) samples from the Religious Order Study and Rush Memory and Aging Project (ROS/MAP) . A total of 1 761 proteins were tested in the proteome reference weight. We defined significant results at P < 2.839 × 10 −5 (0.05/1 761) using Bonferroni correction based on the number of proteins tested across the reference weight. SMR We implemented the SMR & HEIDI methods to prioritize gene expressions and proteins that could impact cortical structure . The eQTL summary data were obtained from the BrainMeta dataset, which includes RNA-seq data from 2 865 brain cortex samples of 2 443 unrelated individuals of European ancestry . 
The pQTL summary data were sourced from the ROS/MAP, which contains protein abundance measures from 1 277 dPFC samples of European ancestry . Bonferroni correction was applied to consider multiple comparisons. Significant gene expressions and proteins were confirmed if the adjusted P -value for SMR analysis < 0.05, with a conservative unadjusted P -value for Heterogeneity in Dependent Instrument (HEIDI) test > 0.01. CELLECT To identify cell types related to birth weight, we performed CELLECT analyses with default parameters . GWAS summary statistics of birth weight with three sources of cell-type expression were used in the analyses. Firstly, we used the Tabula Muris dataset, which contains 115 cell types from 18 mouse organs, of which 7 cell types are related to brain . Then, we used the Mouse Nervous System dataset, which includes 265 cell types from the mouse nervous system . Finally, due to the relatively significant differences between mice and humans, we incorporated the Allen Brain Map Human Multiple Cortical Areas SMART-sequence dataset, which investigates 120 cell types from the human cortex . Significant results were identified with a nominal unadjusted P -value < 0.05.
Study overview

As depicted in Fig. , this study follows a three-phase approach. In phase 1, we conducted two-sample MR analyses to investigate the association of birth weight with cortical structure. In phase 2, we first performed a transcriptome-wide association study (TWAS) to identify gene expressions related to birth weight. Then, we carried out summary-based MR (SMR) analyses incorporating expression quantitative trait loci (eQTL) datasets to determine whether these gene expressions could impact the cortical structure measures highlighted in the MR analyses. Additionally, we performed proteome-wide association studies (PWAS) and protein quantitative trait loci (pQTL)-based SMR analyses to provide evidence at the proteomic level. In phase 3, we applied the cell-type expression-specific integration for complex traits (CELLECT) method to explore the cell-type enrichment of birth weight. All data used in this study were deidentified and publicly available; therefore, no ethical approval was required.
Data sources

Birth weight

We obtained the largest GWAS summary statistics of birth weight to date, from the UK Biobank (UKB) and 35 studies participating in the Early Growth Genetics (EGG) Consortium (Supplementary Table ). The original GWAS recruited up to 321 223 participants of predominantly European ancestry (~93%). Birth weight was collected through actual measurements, medical records and self-reporting, and all measures were transformed to z-scores before analysis. A fixed-effects meta-analysis was conducted to combine the summary statistics from the UKB and the EGG Consortium. After approximate conditional and joint multiple-SNP analyses, 146 independent SNPs reaching a significance threshold of 6.6 × 10⁻⁹ were reported . We selected the 143 independent significant SNPs located on autosomes as instrumental variables (IVs) for birth weight in the two-sample MR analysis (Supplementary Table ). We also downloaded summary statistics based on 298 142 individuals of European ancestry for the other downstream analyses.
Global cortical structure

We obtained the latest and most comprehensive GWAS summary statistics of cortical structure to date, involving 36 663 individuals from the UKB and the Adolescent Brain Cognitive Development (ABCD) study (Supplementary Table ). Thirteen cortical phenotypes were analyzed globally and regionally in the original GWAS. High-resolution anatomical magnetic resonance imaging (MRI) data were processed to extract 8 cortical macrostructural phenotypes: surface area (SA), volume, thickness, folding index (FI), intrinsic curvature index (ICI), local gyrification index (LGI), mean curvature (MC) and Gaussian curvature (GC). Diffusion MRI data were processed to extract 5 cortical microstructural phenotypes: fractional anisotropy (FA), mean diffusivity (MD), isotropic volume fraction (ISOVF), intracellular volume fraction (ICVF) and orientation dispersion index (ODI). Before analysis, all cortical phenotypes were standardized, and the global measures were calculated as the average across the 180 bilaterally averaged cortical regions. We obtained the full-set summary statistics of the global measures of the thirteen macro- and microstructural phenotypes for analysis.
Statistical analysis

MR

We performed two-sample MR analyses to investigate the association between birth weight and cortical structure. Initially, we computed the R² to determine the variance in birth weight explained by the IVs and used F-statistics to assess instrument strength; IVs with F-statistics below 10 were considered weak instruments and excluded from the analysis. For the primary analysis, we employed the inverse-variance weighted (IVW) approach , which operates under the assumption that all genetic variants are valid IVs. We complemented the IVW estimates with the weighted median, MR-Egger, MR-PRESSO and IVW radial methods. The weighted median method can robustly estimate the causal relationship even when fewer than 50% of the genetic variants are invalid IVs . The MR-Egger method includes an intercept term to account for directional pleiotropy . The MR-PRESSO method detects outliers and refines causal estimates by removing them . The IVW radial method allows more straightforward detection of outliers and influential data points . We also performed leave-one-out analyses to assess whether any single SNP disproportionately drove the associations. To investigate potential bias due to sample overlap between the birth weight and cortical structure datasets, we further applied the MRlap method . Furthermore, considering educational attainment and adult body mass index as potential confounders , we implemented multivariable MR (MVMR) analyses to evaluate the independent effect of birth weight on cortical structure. We used Bonferroni correction to account for multiple testing in the primary analyses. An association was considered significant if the P-value in the primary analysis was below 3.846 × 10⁻³ (0.05/13) and the direction of effect estimates remained consistent across all methods. We reported the MR analyses following the Strengthening the Reporting of Observational Studies in Epidemiology using MR (STROBE-MR) guidelines (Supplementary Table ).
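For readers who want to see the core quantities concretely, the sketch below reimplements the per-SNP R² and F-statistic screen and the fixed-effect IVW estimator from summary statistics in Python/NumPy. It is an illustrative sketch under standard two-sample MR assumptions (standardized exposure, independent SNPs), not the code used in the study, which would typically rely on dedicated R packages; the toy inputs and variable names are hypothetical.

```python
import numpy as np
from scipy import stats

def instrument_strength(beta_exp, eaf, n):
    """Approximate per-SNP R^2 and F-statistic for a standardized exposure.

    R^2 ~= 2 * MAF * (1 - MAF) * beta^2 (exposure in z-score units);
    F ~= (n - 2) * R^2 / (1 - R^2). SNPs with F < 10 would be excluded.
    """
    maf = np.minimum(eaf, 1 - eaf)
    r2 = 2 * maf * (1 - maf) * beta_exp ** 2
    f_stat = (n - 2) * r2 / (1 - r2)
    return r2, f_stat

def ivw_estimate(beta_exp, beta_out, se_out):
    """Fixed-effect inverse-variance weighted (IVW) causal estimate.

    Equivalent to weighted least squares of outcome betas on exposure betas
    through the origin, with weights 1 / se_out^2.
    """
    w = beta_exp ** 2 / se_out ** 2          # weight of each Wald ratio
    wald = beta_out / beta_exp               # per-SNP Wald ratio estimates
    beta_ivw = np.sum(w * wald) / np.sum(w)
    se_ivw = np.sqrt(1 / np.sum(w))
    p = 2 * stats.norm.sf(abs(beta_ivw / se_ivw))
    return beta_ivw, se_ivw, p

# Toy inputs for three hypothetical SNPs (illustration only, not real data).
beta_exp = np.array([0.03, 0.05, 0.02])      # SNP effects on birth weight (z-score units)
beta_out = np.array([0.006, 0.012, 0.005])   # SNP effects on a cortical phenotype
se_out = np.array([0.010, 0.012, 0.009])
eaf = np.array([0.30, 0.40, 0.25])

r2, f = instrument_strength(beta_exp, eaf, n=298142)
b, se, p = ivw_estimate(beta_exp, beta_out, se_out)
print(f"min F = {f.min():.1f}; IVW beta = {b:.3f} (SE {se:.3f}), P = {p:.1e}")
print(f"Bonferroni threshold for 13 global phenotypes: {0.05 / 13:.3e}")
```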
TWAS

We performed a primary TWAS following the FUSION pipeline to identify gene expressions associated with birth weight. We integrated the birth weight summary statistics with cross-tissue expression weights calculated through sparse canonical correlation analysis (sCCA) . A total of 37 917 sCCA features extracted from the GTEx v8 release were included in the cross-tissue expression reference weights. We defined significant results at P < 1.319 × 10⁻⁶ (0.05/37 917) using Bonferroni correction based on the number of features tested across the reference weights. As a sensitivity analysis, we performed an S-MultiXcan analysis for birth weight based on expression data from 49 tissues in the GTEx v8 release to examine the robustness of the sCCA-TWAS results .
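The FUSION-style association test underlying this step can be summarized as a weighted combination of GWAS z-scores. The sketch below is a schematic NumPy illustration of that statistic, assuming precomputed expression weights and a reference LD matrix; it is not the FUSION software itself, and the four-SNP inputs are hypothetical.

```python
import numpy as np

def twas_z(weights, gwas_z, ld):
    """Weighted z-score association statistic for one gene/feature.

    weights: cis-SNP expression weights (e.g. from an sCCA cross-tissue model)
    gwas_z:  GWAS z-scores for the same SNPs (here, birth weight)
    ld:      SNP-SNP correlation (LD) matrix from a reference panel
    Returns z = w'z / sqrt(w'Rw), testing whether genetically predicted
    expression is associated with the trait.
    """
    return (weights @ gwas_z) / np.sqrt(weights @ ld @ weights)

# Hypothetical four-SNP example with an identity LD matrix (illustration only).
w = np.array([0.4, -0.2, 0.1, 0.3])
z = np.array([3.1, -1.2, 0.4, 2.5])
R = np.eye(4)
alpha = 0.05 / 37917  # Bonferroni threshold for the sCCA features (~1.319e-6)
print(twas_z(w, z, R), alpha)
```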
PWAS

We performed a PWAS following the FUSION pipeline to identify proteins associated with birth weight. We used a reference weight developed from protein abundance measures of 376 dorsolateral prefrontal cortex (dPFC) samples from the Religious Orders Study and Rush Memory and Aging Project (ROS/MAP) . A total of 1 761 proteins were tested in the proteome reference weight. We defined significant results at P < 2.839 × 10⁻⁵ (0.05/1 761) using Bonferroni correction based on the number of proteins tested across the reference weight.
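The PWAS uses the same weighted-association machinery as the TWAS above, only with protein-abundance weights. The multiple-testing thresholds quoted in this and the preceding subsections are simple Bonferroni divisions, which the short, illustrative snippet below reproduces.

```python
# Bonferroni thresholds quoted in the Methods (illustrative check).
tests = {
    "MR, 13 global cortical phenotypes": 13,
    "sCCA-TWAS, 37 917 features": 37917,
    "PWAS, 1 761 dPFC proteins": 1761,
}
for name, m in tests.items():
    print(f"{name}: 0.05 / {m} = {0.05 / m:.3e}")
# -> 3.846e-03, 1.319e-06, 2.839e-05, matching the thresholds in the text
```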
SMR

We implemented the SMR and HEIDI methods to prioritize gene expressions and proteins that could impact cortical structure . The eQTL summary data were obtained from the BrainMeta dataset, which includes RNA-seq data from 2 865 brain cortex samples of 2 443 unrelated individuals of European ancestry . The pQTL summary data were sourced from the ROS/MAP, which contains protein abundance measures from 1 277 dPFC samples of European ancestry . Bonferroni correction was applied to account for multiple comparisons. Gene expressions and proteins were considered significant if the adjusted P-value of the SMR analysis was < 0.05, with a conservative unadjusted P-value of the heterogeneity in dependent instruments (HEIDI) test > 0.01.
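As a hedged illustration of the SMR step, the snippet below implements the single-instrument SMR effect estimate and its approximate chi-square test based on the top cis-QTL, following the published formulation; the HEIDI heterogeneity test uses multiple cis-SNPs and LD information and is not reproduced here. The input values are hypothetical.

```python
from scipy import stats

def smr_test(b_gwas, se_gwas, b_qtl, se_qtl):
    """Single-instrument SMR estimate and test using the top cis-eQTL/pQTL.

    b_xy = b_gwas / b_qtl estimates the effect of expression (or protein
    abundance) on the trait; T_SMR = z_gwas^2 * z_qtl^2 / (z_gwas^2 + z_qtl^2)
    is approximately chi-square distributed with 1 degree of freedom.
    """
    z_gwas, z_qtl = b_gwas / se_gwas, b_qtl / se_qtl
    b_xy = b_gwas / b_qtl
    t_smr = (z_gwas ** 2 * z_qtl ** 2) / (z_gwas ** 2 + z_qtl ** 2)
    return b_xy, stats.chi2.sf(t_smr, df=1)

# Hypothetical top-QTL summary values (illustration only).
print(smr_test(b_gwas=0.02, se_gwas=0.004, b_qtl=0.35, se_qtl=0.03))
```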
CELLECT

To identify cell types related to birth weight, we performed CELLECT analyses with default parameters . The GWAS summary statistics of birth weight were combined with three sources of cell-type expression data. First, we used the Tabula Muris dataset, which contains 115 cell types from 18 mouse organs, 7 of which relate to the brain . Second, we used the Mouse Nervous System dataset, which includes 265 cell types from the mouse nervous system . Finally, given the substantial differences between mice and humans, we incorporated the Allen Brain Map Human Multiple Cortical Areas SMART-seq dataset, which covers 120 cell types from the human cortex . Significant results were identified at a nominal unadjusted P-value < 0.05.
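CELLECT itself couples expression-specificity scores with S-LDSC/MAGMA-based enrichment models; the snippet below is only a schematic of the underlying idea, testing whether genes with higher expression specificity in a given cell type carry stronger gene-level GWAS signal. It is an assumption-laden simplification, not the CELLECT implementation, and the data are simulated.

```python
import numpy as np
import statsmodels.api as sm

def prioritize_cell_type(gene_gwas_z, es_scores):
    """Schematic cell-type prioritization (not the CELLECT implementation).

    Regresses gene-level GWAS association (e.g. MAGMA-style z-scores for
    birth weight) on expression-specificity (ES) scores for one cell type
    and returns a one-sided P-value for a positive ES coefficient.
    """
    X = sm.add_constant(es_scores)
    fit = sm.OLS(gene_gwas_z, X).fit()
    beta, p_two_sided = fit.params[1], fit.pvalues[1]
    p_one_sided = p_two_sided / 2 if beta > 0 else 1 - p_two_sided / 2
    return beta, p_one_sided

# Simulated toy data: 1 000 genes, one cell type with a weak true enrichment.
rng = np.random.default_rng(0)
es = rng.uniform(0, 1, 1000)
z = 0.3 * es + rng.normal(size=1000)
print(prioritize_cell_type(z, es))
```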
Association between birth weight and cortical structure

The F-statistics for all IVs were greater than 10, indicating adequate instrument strength (Supplementary Table ). Our IVW results indicated that genetically predicted birth weight was positively associated with five cortical macrostructural phenotypes, namely global FI (β, 0.06; 95% CI, 0.03–0.09), ICI (β, 0.10; 95% CI, 0.04–0.16), LGI (β, 0.16; 95% CI, 0.07–0.26), SA (β, 0.23; 95% CI, 0.13–0.33) and volume (β, 0.22; 95% CI, 0.13–0.32), after Bonferroni correction (Fig. ). The association between genetically predicted birth weight and global GC reached nominal significance (β, −0.05; 95% CI, −0.09 to −0.01, P = 0.0197). No association was found between birth weight and the global microstructural phenotypes (P > 3.846 × 10⁻³). For the significant associations, concordant estimates were obtained with the weighted median, MR-Egger, MR-PRESSO and IVW radial methods. The MRlap method also indicated consistent findings, suggesting the results were robust to sample overlap (Supplementary Table ). Leave-one-out analyses showed no outlying SNPs (Supplementary Fig. ). Further adjustment for educational attainment and adult body mass index yielded directionally consistent results, with global SA and volume remaining significant, while FI, ICI and LGI reached nominal significance (Supplementary Table ).
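Assuming the reported intervals are symmetric Wald-type 95% CIs, the standard error and test statistic behind each estimate can be recovered with a few lines of arithmetic, as in this illustrative check for global SA.

```python
from scipy import stats

def se_z_p_from_ci(beta, lo, hi):
    """Recover SE, z and two-sided P from a symmetric Wald-type 95% CI."""
    se = (hi - lo) / (2 * 1.96)
    z = beta / se
    return se, z, 2 * stats.norm.sf(abs(z))

# Reported IVW estimate for global SA: beta 0.23, 95% CI 0.13-0.33.
se, z, p = se_z_p_from_ci(0.23, 0.13, 0.33)
print(se, z, p, p < 0.05 / 13)  # SE ~0.051, z ~4.5, P well below the Bonferroni threshold
```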
Gene expressions linking birth weight and cortical structure

The sCCA-TWAS identified 223 gene expressions associated with birth weight (P < 1.319 × 10⁻⁶) (Supplementary Table – ), of which 143 were included in the eQTL dataset for subsequent SMR analyses. Among these, six gene expressions (RABGAP1, CNNM2, CENPW, CDK6, CALHM2 and ATP5MD) were also associated with at least one cortical structural phenotype (P < 3.497 × 10⁻⁴) (Fig. ; Supplementary Table – ). Sensitivity analysis with S-MultiXcan showed 337 gene expressions associated with birth weight (P < 2.259 × 10⁻⁶), with 192 included in the eQTL dataset. Of these, five gene expressions (RABGAP1, CNNM2, CENPW, COMMD7 and DNMT3B) were associated with at least one of the cortical structural phenotypes (P < 2.604 × 10⁻⁴; Supplementary Table – ). In the primary and sensitivity analyses, three gene expressions (RABGAP1, CENPW and CNNM2) were consistently identified (Fig. ). RABGAP1 was associated with global FI, ICI, LGI and SA. CENPW was related to global FI, ICI, LGI and volume, but not SA. CNNM2 was associated with global LGI. ATP5MD, CDK6 and CALHM2 were not identified in the sensitivity analyses.
Proteins linking birth weight and cortical structure

The PWAS identified 24 cis-regulated proteins in the dPFC associated with birth weight (P < 2.839 × 10⁻⁵), including 16 novel signals compared to the sCCA-TWAS (Supplementary Table ). Among these, 23 were included in the pQTL dataset. Subsequent SMR analyses revealed that four cis-regulated proteins (RAB7L1, RAB5B, PPA2 and CNNM2) were also related to at least one cortical structure (P < 2.174 × 10⁻³) (Fig. ; Supplementary Table – ). The cis-regulated protein of CNNM2 was identified as linking birth weight and global LGI, which is consistent with the transcriptomic findings. Additionally, we found that the cis-regulated protein of RAB5B was related to multiple cortical structures, including global FI, ICI, LGI, SA and volume. The cis-regulated proteins of RAB7L1 and PPA2 were also associated with global ICI and LGI, respectively.
Birth weight variants enriched in cell types of mouse and human brain

For the Tabula Muris dataset, we identified significant enrichment in brain pericytes (P < 0.05). In addition, applying the analyses to the Mouse Nervous System dataset, we identified 10 enriched brain cell types associated with birth weight, predominantly annotated as vascular cell types and located within the central nervous system (CNS) (P < 0.05). In the human cortex dataset, we found three cell types significantly associated with birth weight, two of which were inhibitory GABAergic neurons and the other a non-neural cell type annotated as pericyte (P < 0.05) (Fig. , Supplementary Table – ).
This study comprehensively investigated the phenotypic associations and potential biological relationships between birth weight and various aspects of cortical structure. It indicated positive associations between birth weight and several cortical macrostructural phenotypes, including global FI, ICI, LGI, SA and volume. Additionally, we suggested potential biological relationships underlying these associations, pinpointing functional genes such as CNNM2, CENPW, RABGAP1, ATP5MD and RAB5B, whose cis-regulated expression or protein levels may participate in these associations. Brain cell types such as inhibitory neurons and pericytes were also implicated in birth weight, suggesting a cellular basis for the observed associations. These findings highlight birth weight as an early determinant of cortical anatomy and prioritize candidate pathways for further study, facilitating the discovery of therapeutic targets for precision interventions. Although research on the association between birth weight and cortical structure is limited, some studies are consistent with our findings on birth weight and cortical folding (FI, ICI and LGI), SA and volume . These results suggest an important role of the intrauterine environment in cerebral cortical expansion, as both sulco-gyral folding and SA are associated with constrained cortical growth . We found that birth weight was positively correlated with cortical SA but not thickness, aligning with another study . This may be attributed to the different developmental mechanisms of SA and thickness, as the radial unit hypothesis suggests that cortical SA depends on the number of cortical columns, while thickness is determined by the number of neurons produced within those columns . Additionally, considering the developmental sequence of cortical macrostructure and microstructure, it is perhaps unsurprising that no association was found between birth weight and microstructure. The first six months after birth are critical for cortical microstructure growth . Similarly, microstructural phenotypes of the cerebral cortex have been associated with genes showing peak expression postnatally, rather than genes that are relatively highly expressed before birth . Leveraging multi-omics approaches, we identified several functional genes, including CNNM2, CENPW, RABGAP1, ATP5MD and RAB5B, whose cis-regulated expression or protein levels may contribute to the biological mechanisms underlying the observed associations. Both the transcriptomic and proteomic analyses highlighted CNNM2, a gene encoding a magnesium transporter . This finding aligns with the existing literature. CNNM2 is located near a lead SNP (rs10883846) associated with birth weight , and it encodes an important protein involved in brain development and neurological function . Its expression in the prefrontal lobe has been associated with sensorimotor gating function, dendritic spine morphogenesis, cognition and the risk of schizophrenia . Several studies also provide insights into the mechanisms connecting CNNM2 and birth weight to cortical development. Birth weight has been associated with perinatal asphyxia and brain maturation . In an animal model of perinatal asphyxia, reduced CNNM2 expression was observed in asphyxia-induced rats . Additionally, researchers have found that CNNM2 expression increases during neuronal differentiation, with higher levels in mature neurons compared to undifferentiated cells .
Taken together, these findings suggest that lower birth weight may reduce CNNM2 expression through mechanisms such as perinatal asphyxia or altered neuronal maturation, potentially impacting cortical structure and brain function. Furthermore, our transcriptomic analyses identified a significant signal located on the same chromosomal band as CNNM2 (10q24.33), namely ATP5MD. A previous study demonstrated that knocking down CNNM2-rs1926032 could induce downregulation of ATP5MD expression, further disrupting neurodevelopment . These findings suggest that additional pathways implicated by CNNM2 and ATP5MD, involving energy metabolism and magnesium transport , could be important biological mechanisms linking birth weight to cortical structure. The expression signals of CENPW and RABGAP1 were replicated in the sCCA-TWAS and S-MultiXcan analyses for birth weight and were associated with at least four of the target cortical structural phenotypes, providing compelling evidence for important transcriptomic signals linking birth weight to cortical structure. CENPW encodes centromere protein W, which is crucial for chromosome maintenance and the cell cycle . It is the nearest gene to rs6925689, a significant SNP associated with birth weight . In this study, we observed that CENPW expression was negatively associated with global FI, ICI, LGI and volume. Similar findings have been reported previously , supporting a role for CENPW in neurodevelopment. Evidence suggests that increased CENPW expression may lead to altered neurogenesis or decreased apoptosis, thereby affecting structural changes such as cortical expansion , which is also consistent with the radial unit hypothesis . RABGAP1 encodes a GTPase-activating protein involved in mitosis, cell migration and vesicular trafficking. It is the nearest gene to another lead SNP (rs10985827) for birth weight , and has been related to a novel neurodevelopmental syndrome . However, to our knowledge, the mechanisms by which CENPW and RABGAP1 are associated with birth weight or early development remain unclear, and further studies are needed. Based on the biological functions of CENPW and RABGAP1, cell-cycle regulation and intracellular transport might be crucial in the association between birth weight and cortical structure. In our proteomic analyses, we found that the cis-regulated protein of RAB5B was associated with birth weight and with global FI, ICI, LGI, SA and volume. RAB5B encodes a protein belonging to the Ras/Rab superfamily of small GTPases . Previous studies have implicated Rab proteins in cortical neuron migration , neurodegeneration , the control of cognitive function , and the etiopathogenesis of several neurodegenerative disorders, including Parkinson's disease and Alzheimer's disease . Nevertheless, RAB5B has not been widely recognized as associated with birth weight in the existing literature. The association between RAB5B and birth weight therefore warrants further investigation, and its role in the relationship between birth weight and cortical structure should be explored in future studies. Our CELLECT analyses indicated that birth weight-associated variants were enriched in inhibitory neurons and pericytes in the human cortex. A previous study has shown that gestational age at birth was positively associated with gamma-aminobutyric acid (GABA) concentrations . Similarly, another study found that intrauterine growth restriction could affect GABAergic synapse development .
These findings support our results linking birth weight, a comprehensive proxy of intrauterine development, to inhibitory GABAergic neurons. Additionally, previous studies have identified that GABAergic neurons play crucial roles in neurogenesis, migration, dendrite arborization and synaptogenesis during mid-to-late gestation, processes that are related to cerebral anatomy, including cortical expansion and sulcus formation . As for pericytes, existing evidence suggests they are essential for blood–brain barrier (BBB) integrity, which begins to form as early as embryogenesis . BBB breakdown and brain capillary damage are early biomarkers of human cognitive dysfunction , and are linked to various neuropsychiatric disorders such as Alzheimer's disease . These findings align with our observations of cerebrovascular cell types in the mouse brain, despite the differences between the mouse and human CNS. Given the evidence presented, further studies are warranted to elucidate the roles of GABA and the BBB in early life development and cortical structure. Our findings have implications for understanding the mechanisms of neurodevelopment and potential interventions. First, birth weight, as a proxy of intrauterine growth, is causally associated with several aspects of brain structure that underlie the anatomy of neuropsychiatric disorders. Therefore, early interventions are crucial for children with low birth weight to optimize neurodevelopment and mitigate the risk of neuropsychiatric disorders later in life. Second, we identified genes involved in processes including calcium and magnesium transport and cell-cycle regulation that may contribute to the causal associations mentioned above. Although evidence is limited, some of these findings align with established neurodevelopmental hypotheses, which could potentially inform drug development. Furthermore, we suggest that inhibitory neurons, pericytes and even cerebrovascular cells could be the underlying cellular bases. These findings may provide potential directions for further mechanistic studies of intrauterine growth and cortical development. This study has several limitations. First, sample overlap was inevitable as the genetic data for both traits involved UKB participants, which may introduce bias into our estimations; however, the results of the MRlap analyses suggested the influence was unlikely to be substantial. Second, to minimize bias due to population stratification, we restricted all genetic data used in this study to predominantly individuals of European ancestry. Consequently, caution is required when generalizing our findings to other ethnic populations. Third, although we identified several genes and cell types potentially involved in the underlying mechanisms, experimental validation is warranted. In conclusion, the results of this study provide evidence highlighting the associations between birth weight and various aspects of global cortical structure. We found that the cis-regulated expression or protein levels of genes involved in cellular metabolism, such as CNNM2 in magnesium transport, and brain cell types, such as inhibitory neurons, might underlie the biology of the observed associations. Further studies are essential to validate these results and identify the related mechanisms.
Supplementary Materials

Supplementary Tables
Fertility clinics have a duty of care towards patients who do not have children with treatment

Having children is viewed as playing a central role in many people's lives, from providing meaning to support in older age. For an increasing number of people, parenthood is being achieved through medically assisted reproduction (MAR). Consistently, success in MAR has mostly been measured in terms of achieving (healthy) livebirths. We argue that this focus is too narrow, and that success should be measured in terms of alleviating suffering caused by an unfulfilled child wish ( ). The crucial difference is that fertility clinics need to better tailor their care towards effective support for patients who do not have child(ren) with treatment ( ).
First, because not achieving a livebirth is a common outcome of treatment

Many patients who start treatment will end it without children. In the UK, of 107 347 women who started IVF between 1999 and 2007, only 47 189 (44%) had a child after up to eight IVF cycles. The most optimistic estimates indicate that four in every ten patients who do three IVF cycles in the UK or other Western countries will be in this situation ( ; , ; ; ). Counselling should therefore prepare patients equally for both outcome scenarios from the start. While optimistic counselling portraying a live birth as the expected outcome could seem beneficial because it makes patients worry less about negative outcomes, it is likely to add to the suffering of those who end treatment without a live birth, as this will then be an unexpected event they have not been prepared for. Currently, the likelihood of treatment not working is only implicitly discussed, if and when cumulative pregnancy rates are reported to patients, and the emotional burden associated with treatment and its negative outcome(s) is not routinely discussed ( ; ; ; ). Ideally, this is replaced by implications counselling, whereby patients are encouraged to reflect on how they feel about potential future scenarios and outcomes, as guidelines routinely recommend for decision-making about different treatment choices ( ), e.g. third-party reproduction ( ; ; ; ).

Second, because treatment is psychologically burdensome and creates new losses

Fertility treatment adds to patients' suffering in different ways. In the treatment process, new embryos are created that often result in 'new' losses over 'what could have been', either because they do not lead to a pregnancy or because they lead to a miscarriage. The emotional burden of treatment has been extensively documented and is mostly associated with the emotional roller coaster of repeatedly building hope despite uncertainty and losing it with news of negative results ( ). Some patients argue that the way treatment is organized intensifies their desire for children and, consequently, their suffering with the associated losses ( ; ). Indeed, meta-analysis shows that patients who report a stronger child desire through treatment are at higher risk of maladjustment during and in the aftermath of treatment ( ; ). Ending treatment without children triggers intense grief that is associated with moderate to large impairments in mental health and wellbeing ( ), from which one in ten patients never recover ( ). To the extent that psychosocial suffering is a product of, or intensified by, fertility treatment, clinics have a duty to address it, and patients increasingly advocate for this ( ).

Third, because the field has the necessary expertise to support patients

Fertility clinics have expertise in how to support patients for whom treatment does not fulfil their desire for children (e.g. ; ; ), and it requires minimal effort to adjust care in the MAR trajectory in a way that equally benefits patients who will achieve a live birth and those who will not. Given the significant benefits for patients and the minimal sacrifices required from clinics, this is an example of a so-called 'easy rescue' ( ), and thus lack of action is unethical.
The literature indicates that nine in ten people eventually adjust to ending treatment without the children they wish for, but patients describe this loss as devastating and the adjustment process as difficult, prolonged (2 years on average), and marked by daily suffering and social isolation ( ). Clinics should help to ease this adjustment process, but research indicates a lack of investment in this endeavour ( ; ; ; ), and patients feel abandoned by their clinics and left to their own devices, expressing frustration and dissatisfaction ( ; ). Tellingly, of the 86 evidence-based recommendations presented in the ESHRE Guidelines for Routine Psychosocial Care in Infertility and Assisted Reproduction, only seven (8%) focus on supporting patients for whom treatment does not work ( ). At a societal level, there is low public recognition of the grief associated with ending treatment, and there are no mourning rituals ( ). There is some evidence to suggest that the current lack of investment may compound patients' difficulties by not enabling them to return to the clinic for support ( ), not preparing them for the difficult emotions that support may initially trigger ( ), and not contributing to higher awareness of this topic within primary or mental-health care, healthcare routes patients may also pursue ( ).

Fourth, because it is part of patient-centred care and what patients desire

Multiple recent primary and evidence-synthesis studies clearly indicate that patients are open to and value information about the negative aspects of treatment ( ; ; , ; ). They want a realistic overview of what their treatment journey will look like before they embark on it, including the probability of negative outcomes, so that they can better prepare by developing coping skills, anticipating decisions they may have to make, and knowing how to access support when needed ( ; ; ). A very recent survey showed that nine in ten patients were willing to discuss the possibility and implications of their treatment not working while still undergoing treatment, with seven in ten stating that the best time to do so is before their first cycle. Patients reported that such conversations should focus on providing an overview of their whole treatment pathway and its potential negative outcomes, imparting knowledge and skills to better process loss and sustain a hopeful outlook if treatment ends up not working, and informing them about how to access emotional support and pursue other routes to parenthood and alternative life goals ( ).
In this section, we discuss the most prevalent concerns highlighted in the literature about the feasibility and adequacy of addressing the possibility of treatment ending without children.

Forewarning that treatment may not work might hinder patients' hope and put them off undergoing treatment

Healthcare professionals (HCPs) often express concerns about discussing negative treatment outcomes with patients because they do not want to be perceived as unsupportive or discouraging, nor drive patients away from the clinic ( ). Similarly, most patients think hope and optimism are important and that too much negativity before treatment starts is not appropriate ( ; ). While patients foregoing treatment based on proper information about all possible outcomes is not problematic, and in fact preferable to patients only pursuing treatment because they are ill-informed, the fear that patients may not pursue treatment because they do not feel supported is a valid concern. However, maintaining a positive and supportive attitude does not mean ignoring the possibility of unwanted outcomes. Indeed, much psychological theorizing and research indicates that people regulate their hope in multiple ways and that just 'thinking positively' is not always helpful ( ; ). For instance, people who report negative expectations when waiting for news feel more anxious than positive thinkers while waiting, but also feel less dismayed when the news they receive is bad, compared with those who report positive expectations ( ). Thus, hope can have negative consequences if it turns out to be 'false' hope. Furthermore, some people do not only plan for achieving their desired outcomes but consider a matrix of competing possibilities. Research has shown that making plans about how to cope with barriers and blockages to personal goals can help reduce intrusive painful thoughts about such goals, even when no actual action is taken or progress is made ( ). Overall, the research suggests that fostering hope during treatment is not equally beneficial to everyone and that being hopeful does not equate to ignoring potential negative outcomes. A systematic review also showed that patients indicate losing faith in treatment or perceiving they have a poor prognosis as reasons for having stopped treatment in only 5% and 9.5%, respectively, of the instances in which they were presented with these options (whereas, for instance, the physical and psychological burden of treatment was chosen 25% of the time; ). Despite this evidence, research indicates that as few as two in ten patients report that their fertility team acknowledged the possibility of treatment not working ( ). Some patients perceive that they are rushed through the IVF process without being fully informed and without much consideration of the negative impact of treatment ( ; ). One consequence may be that patients are unprepared to cope with negative results, which trigger depressive symptoms, a decline in motivation and a need to rebuild strength and hope ( ; ). As patients themselves argue, putting more emphasis on forewarning can help them to better prepare and take ownership of treatment ( ). The challenge that research needs to address is how to talk about possible negative outcomes in a comprehensive but hopeful and sensitive way. Another consequence may be that some patients perceive that they are given false hope or even exploited into doing cycles that are very unlikely to work ( ; ).
Forewarning that treatment may not work may be perceived as incompetence

Qualitative research indicates that HCPs perceive an institutional, professional and personal sense of failure when fertility treatment does not result in children ( ). Their narratives indicate that they move from a position of omnipotence, given their ability to offer the miracle of generating life, to a position of impotence when faced with repeated cycles that do not result in pregnancy ( ; ). In conjunction with high patient expectations, these conflicting emotions can result in performance anxiety and avoidance of decision-making around ending treatment ( ). In this context, HCPs may feel their role is to keep patients hopeful until treatment eventually succeeds, potentially leading patients to do more cycles than initially anticipated. HCPs may also find it difficult to temper patients' expectations, even when these are perceived as unrealistic, and especially when patients are extremely committed to pursuing their parenthood goals ( ; ; ). More transparency and discussion of average success rates in the field of MAR prior to the start of treatment may help to disentangle the notion of competence from the outcome of treatment cycles. This approach can also facilitate patients' decision-making about cycle uptake, unburdening HCPs from the moral dilemma of deciding for whom and when the option of stopping treatment should be introduced ( ; ). An interview-based study with HCPs showed that when they perceived that the decision to end treatment was discussed, shared and accepted by patients, patients trusted them more, and this increased their sense of professional fulfilment ( ). In sum, the way that the end of treatment is communicated with patients can shape trust and quality of care.

HCPs feel ill-prepared and lack appropriate skills to engage in conversations about treatment ending without children

Conversations about negative treatment outcomes are hard for patients and staff alike. Both fear that these may trigger anxiety in patients, which has been confirmed by research ( ), and a minority feel it makes no sense to discuss something that may not happen ( ; ). Qualitative evidence indicates that HCPs find it hard to use personal discretion in deciding with whom and when to have end-of-treatment conversations and feel unprepared to introduce such sensitive topics and manage the difficult emotions these may trigger (especially anger, but also sadness, disappointment and frustration) ( ; ; ). HCPs report feeling conflicted in their own decision-making about discussing the end of treatment with patients because they lack explicit criteria, policies or a formal ethical framework to guide such decisions ( ; ). However, HCPs cannot escape these conversations and have a duty not to avoid them, nor to add further suffering to patients by failing to invest in how to approach them well. Indeed, patients list insensitive communication from HCPs as bad news in itself, compounding the negative impact of the bad fertility news shared ( ). A call to action for more research and professional development opportunities is required here, as it is unquestionable that more communication training is needed to support HCPs in approaching what is one of the hardest tasks in their jobs. Psychologists and counsellors have this expertise and can lead knowledge transfer within multidisciplinary teams, but only a few evaluated initiatives have been reported so far (e.g. ; ).
In this section, we offer research-informed recommendations to support clinics interested in promoting patients’ adjustment to ending treatment without children, which are summarized in .
Reframe fertility treatment ‘success’ and ‘failure’
We need to reflect on why we only consider the birth of a (healthy) baby as success in MAR. Unless one adopts an explicitly pronatalist ideology, this is not because producing children is morally laudable, but rather because by helping people conceive, they are helped in achieving goals they judge central to their wellbeing and purpose in life. The association between parenthood, wellbeing, and happiness is complex (see ; ) and patients may be mistaken in their expectation that becoming a parent will substantially increase their wellbeing ( ). For the purpose of this article, it suffices to say that an unfulfilled wish for (more) children can cause intense suffering and is more strongly associated with wellbeing outcomes than parental status ( ). The logical conclusion is that alleviating this suffering should be the main underlying rationale for fertility treatment and should underpin the definition of successful treatment. We therefore propose a new definition of treatment success in infertility care. Fertility treatment is successful when the suffering that accompanies subfertility and infertility is alleviated throughout and beyond the treatment trajectory. Successful treatment can either end with a healthy live birth, or with a state in which patients succeed in coping with their infertility in such a way that it no longer has a detrimental effect on their wellbeing. Reframing the success of MAR in this way has consequences. First, the language used within the field needs to be scrutinized. While it is true that ending treatment with a healthy child is the preferred outcome or ‘plan A’ for patients, we should be careful about framing not achieving this ‘plan A’ in terms of failure, because there are other options (plan B) that can be explored to reach the goal of relieving patient suffering. We must also avoid framing the pursuit of other avenues as non-compliance, non-adherence, abandonment of treatment, giving up, or dropping out ( ; ), terms which have very negative and culpable connotations. Besides creating misplaced feelings of guilt and incompetence in patients and HCPs when treatment does not have the desired outcome ( ; ; ), such language may contribute to HCPs and patients continuing treatment for longer than they had initially planned or judged desirable ( ). Although changing language is never easy, it is important that we reinvent the way we talk about the end of treatment. Suggestions are to simply be descriptive and refer to ‘ending treatment with or without children’ or ‘treatment (not) resulting in a pregnancy or livebirth(s)’, as per the language used in this article. Second, HCPs need to weigh the suffering caused by an unfulfilled wish for children against the suffering caused by treatment. Given how uncertain the treatment outcome is, the wellbeing of patients during and after treatment (regardless of its outcome) should be a major concern driving clinical decision-making. Patients can be willing to bear the treatment burden, but only if they gain effectiveness ( ), which becomes less likely as patients progress through consecutive cycles. Firstly, clinics can minimize the risk of treatment becoming an unstoppable rollercoaster by encouraging patients to proactively consider the aspects of treatment they can influence even before they start it, e.g. how long and how much they want to spend doing treatment and how many cycles they are willing to do ( ; ; ). Secondly, clinics can promote active and shared decision-making about doing (more) cycles of treatment, rather than both parties assuming that treatment continuation is the default option ( ). Overall, patients and clinicians are expected to benefit from the message that they do not have to go to extremes for their efforts to be valued, and that stopping treatment ‘in time’ (and moving towards a plan B) can be a better decision for patient wellbeing and good clinical practice than desperately holding on to a small chance of plan A succeeding.
Promote open discussion about the possibility of treatment not resulting in children
A first step in preparing patients for the possibility of treatment not resulting in children is to inform them repeatedly and in a transparent manner about the expected chance of achieving a livebirth. People not achieving parenthood after IVF treatment remain largely invisible and unvalued in the public domain ( ), making it difficult for those going through treatment to identify with this group rather than with the group who have children, and fertility patients refer to being socialized to expect a happy ending ( ). Waiting rooms of fertility clinics are oftentimes decorated with birth announcements and baby pictures on a ‘wall of hope’, which can create the false impression that those in the room who have not yet achieved a pregnancy are the exception rather than the rule. Moreover, even patients who have been counselled about their chances of achieving a live birth continue to overestimate them, believing that they are able to ‘beat the odds’ ( ). While conversations about livebirth rates and the high probability of an embryo transfer not resulting in a pregnancy are difficult for HCPs and patients alike, they help in managing patients’ expectations, especially for patients with a poor prognosis ( ). Patients are reported to appreciate honest and realistic information about what they can expect, without sugar coating, even when that information is not always what they would have liked to hear ( ; ), and to feel duped by practitioners offering them ‘false hope’ throughout their treatment ( ). To support adaptive coping in case treatment does not result in children, medical information should be complemented with psychoeducation about what a negative outcome means for most patients.
Encourage patients to develop and discuss ‘plan(s) B’
Clinics and HCPs need to have a consistent treatment narrative. They should not explicitly or implicitly communicate that having genetically related children is the only successful outcome, then replace this message after a couple of failed cycles with one entailing that donor conception is of equal value to genetic parenthood, only to then, after that option fails, tell patients they can be happy without children. Instead, we propose a new treatment narrative in which hope is framed in multiple ways and consistently communicated to patients from the very start. Such a narrative should convey that: (i) the clinic will provide patients with the best treatment (and care) possible to help them have the children they wish for; (ii) happiness and fulfilment are possible even when treatment does not result in children as, with time, most people recover from grief, reporting a sense of survival and personal and spiritual growth, as well as a greater ability to value what life has to offer ( ; ); and (iii) ending medical treatment is not the same as the end of care, and clinics will support patients in coping and navigating through the implications of either treatment outcome ( ). Adopting this narrative has implications for care provision. Firstly, HCPs can advise patients to remain engaged with other life goals that already give them pleasure and fulfilment. Secondly, HCPs can support patients in working out their ‘plan B(s)’ at an early stage of treatment ( ). Systematic review shows that keeping parallel goals to having children is associated with higher wellbeing during and in the aftermath of treatment ( ). Thirdly, HCPs can ensure that ‘plan B(s)’ remain in sight and are discussed at key moments in treatment ( ), for instance at the start of treatment, after each failed cycle or, at the very least, after three or all funded cycles have been attempted. In this context it is important that HCPs send the message that stopping treatment is always an option and can be a positive and brave decision to make, while making sure to reassure patients they did ‘enough’. HCPs also need to keep in mind that a ‘plan B’ is not necessarily donor conception or another medical intervention but can be located outside the clinic. In sum, we advise HCPs to systematically build in moments for patients to consider all possible treatment outcomes and to actively reflect on what is best for them moving forward. Currently, there is a discrepancy between the number of patients who report having properly discussed the end of treatment with their HCPs and the number of patients who end treatment without a baby, meaning that many patients who needed and could have benefited from this conversation did not receive it ( ). Even though some patients may not have the mental room to engage in these conversations when HCPs want to approach them, it is necessary that HCPs systematically signal they are available to have this conversation whenever patients are ready. It is the clinician’s duty to extend the invitation to the patient, and the patient’s right to either embrace or reject this invitation.
Support patients who end treatment without children
After a decision to stop treatment has been reached, it is important to support patients through the rough patch they have ahead of them. Patients have reported feeling abandoned by their fertility team when they decided to discontinue treatment or when no more treatment options were available to them ( ; ). If the success of fertility care is defined in terms of the wellbeing of patients (see above), then care cannot stop when the last cycle ends. While clinicians may feel less competent to guide patients through this phase of the care pathway, an end-of-treatment consultation plays a crucial role in helping patients understand why treatment did not work and reach closure ( ). Even if clinicians cannot pinpoint exactly what did not work, they can remind patients of the limitations of MAR, as this helps patients accept their situation ( ). At this last appointment, HCPs also need to acknowledge patients’ efforts to have children and reassure them that they explored ‘enough’ options ( ). For patients whose ‘plan B’ includes exploring alternative routes to parenthood, information about these should be offered. Finally, HCPs need to inform or remind patients of what they can expect moving forward and reassure them that they can access support immediately or later, when needed or desired.
Create organizational structures to support clinics and HCPs
Clinics and their staff should be supported in coping with the demands and burden of discussing negative treatment outcomes. First, communication skills training on how to share bad news is highly relevant to enable staff to engage confidently in these conversations. The ability to communicate well is a professional competency of any HCP and not just a personal aptitude. Clinics should allocate resources and protect time to allow staff to participate in evidence-based communication training, which has been proven to improve confidence and actual performance ( ), with the use of specific protocols being associated with lower perceived stress when communicating with patients ( ). Second, HCPs need to be supported in addressing not only intense patient emotional reactions but also their own emotions, such as guilt and a sense of failure. This is relevant for their own wellbeing and to prevent such emotions from interfering with their clinical practice ( ; ). Within a multidisciplinary team, mental health professionals (MHPs) can be involved in training and supporting other team members ( ; ; ), for instance by imparting communication and self-care skills, creating opportunities for debriefs after difficult patient encounters, or educating about the patient experience of treatment. Specific to this manuscript’s topic, MHPs can support other staff in discussing critical situations around ending treatment, e.g. when there is disagreement between patients and staff and when moral and ethical dilemmas arise ( ; ), so that clinical decisions are shared within the team, lightening the burden of personal responsibility ( ). Finally, the field should invest in growing the evidence base that informs the provision of end-of-treatment care and in developing tools to aid this endeavour (e.g. development and evaluation of support interventions and shared decision-making aids) so that HCPs feel reassured they are following best-practice, evidence-based recommendations.
The high likelihood of treatment not resulting in a newborn creates an ethical imperative for clinics to prepare and support patients through this potential outcome. This article offers several research-informed recommendations to support clinics in this endeavour. Regulatory bodies need to monitor provision of end-of-treatment care to ensure all fertility patients receive high-quality care.
Essential aspects of external quality assurance for point-of-care testing | 18fe7133-5f95-48ff-a431-45c32908857a | 5382857 | Pathology[mh] | A key issue in external quality assurance (EQA) or proficiency testing is to ensure high quality schemes and to avoid that participation in such schemes does more harm than good. Ideally, any EQA program should provide the participants information of whether their measurement procedure has a bias from a true value . An EQA organisation distributing control material should strive to obtain and use native commutable materials where the target values are set by using a reference method or a certified reference material . However, in many cases this is not possible, control material that is not commutable is used, and thus peer group target values must be established. Such materials have severe limitations since the material may also not be commutable between reagent lots within the same method . Circulation of unsuitable EQA materials could in such cases generate harm by misclassifying participant performance. It is often more difficult to obtain a commutable material when the EQA scheme has a high number of participants since commutable material is based on native patient samples and have a limited stability. In such cases, smaller (national/regional) schemes with fewer participants can be preferred. For point-of-care (POC) testing, it is even more difficult to obtain commutable control materials since the matrix generally is whole blood. Often different control materials have to be circulated to the different POC instruments, and no control materials are available for some POC instruments, e.g. as has been shown for POC international normalized ratio (INR) testing . An alternative EQA approach has been developed in situations where commutable control materials are not available , in which a limited number of selected general practitioner (GP) offices perform a split sample comparison with a central laboratory method using native whole blood patient samples. In addition, non-commutable EQA materials are circulated to all participants. In this way, method performance is addressed by the split samples system and participant performance is addressed by the non-commutable material, and the EQA provider does not need to circulate native materials to all the participants . EQA for POC testing is in many ways similar to EQA for larger hospital laboratories. There is, however, one important difference that is not always acknowledged; the participants. Whereas the participants in the EQA schemes for hospital laboratories usually are specialists in laboratory medicine or medical laboratory scientists, the participants in EQA for POC testing are often the end users of the tests, i.e. health care personnel with little or no knowledge of laboratory medicine. The implication of this is that the EQA organiser has a) to convince the participants that participation in EQA schemes are important, b) be able to circulate materials with time intervals that are acceptable both for the participants and from the organisers point of view, c) produce feedback reports that are understandable by the participants, and d) offer help and guidance to the participants when needed. In addition, it is important to e) address the pre-examination, the examination and the post-examination processes, and f) offer schemes for measurement procedures using interval or ordinal scale. The aim of the present paper is to highlight these essential aspects of EQA for POC testing.
The common opinion in laboratory medicine is that EQA is useful. This opinion is more or less part of our education, and we are so used to it that we do not question its value. However, since EQA of POC testing addresses people without much knowledge of laboratory medicine, we have to explain why they should participate in this system and what they can gain from it. In fact, there is little evidence that participation in EQA is useful for improving the quality of the results, and no evidence concerning the benefit for the patients. The reason for this is partly that it is difficult to isolate the “EQA factor” from other factors that can contribute to the improvement of the quality of the examination processes. In a recent paper by Bukve et al. , looking at the development of the analytical quality of POC analyses over a period of 9 years, it could be shown that the number of times the participants had participated in an EQA program for POC glucose, haemoglobin and C-reactive protein (CRP) testing was an independent factor associated with improved analytical quality for these analytes . Other independent factors associated with improved analytical quality were the type of instrument, performing internal quality control weekly, performing 10 or more patient tests weekly, and having laboratory-qualified personnel performing the tests. Another important factor, which was not investigated in the study by Bukve et al., was that the POC testing laboratories participated in a quality improvement follow-up system in which they always had somebody to contact if they had problems with the tests. To the authors’ knowledge, this study is the first evidence that EQA for POC testing is useful .
The optimal frequency of EQA schemes is often debated, and evidence for an optimal frequency is difficult to find. Looking through the catalogues of different EQA providers, it is easy to see that the frequency of EQA surveys varies, and the scientific reason for this is not given. In these authors’ opinion, a high-quality scheme with commutable material, reference target values and thoroughly elaborated, understandable feedback reports is much more important than a scheme with a high number of surveys. The theoretical reason for this opinion is that EQA should not be a substitute for internal quality control, but should concentrate on finding systematic deviations of measurement procedures, preferably from a true target value. In cases where deviations are found, the users of the tests should be followed up by direct contact. This is of course even more important with POC testing, since the users are clinicians, nurses and health care personnel with little or no education in laboratory medicine. One important factor that promotes a high frequency of schemes is money. It is common that EQA providers are paid for each survey they circulate, meaning that more money is earned if surveys are circulated more frequently. An alternative approach could be that EQA providers are paid for the expertise they provide, so that professional reasons could determine the optimal frequency of surveys and the content of the feedback reports. A working group in the European Organisation for External Quality Assurance Providers in Laboratory Medicine (EQALM) is currently investigating the optimal frequency of EQA schemes, with the goal of providing guidance on evidence-based models for EQA design .
Feedback reports from EQA schemes are often comprehensive and complicated, and can be difficult to understand. Such comprehensive reports can nevertheless be very useful for skilled personnel in large laboratories, e.g. to help them look at time trends, concentration effects, calibration, etc. For the users of POC tests, on the other hand, only two basic questions are important: 1) is my result correct, and 2) if not, what can I do to improve it? The feedback reports must therefore be simple and educational. It is important to explain to the participants whether a deviant result is caused by the instrument they are using, by the reagent lot they are using, or by their own performance. Therefore, the participants should report which instrument and reagent lot they have used in addition to the control results. It is then easier to identify the reason for a deviant EQA result and to describe this in the feedback report. Reagent lot information is often not reported in EQA, probably because it is assumed that lot variation is detected by internal quality control. However, this can be more difficult to achieve for POC testing. In a recent paper dealing with POC tests for urine albumin:creatinine ratio (ACR) and INR, it was shown that there could indeed be large lot variations for these analytes, detected by the EQA control material . Concerning INR, a non-commutable material was used and it could be shown that the material was not even commutable within the method studied, i.e. the material was not commutable between reagent lots. The lot variation found did not reflect results obtained from native patient samples. Thus, if this had not been discovered, the feedback report could easily have resulted in “more harm than good”. In contrast, for ACR, where a commutable material was used, the lot variation was also found using patient samples and could therefore have clinical implications. Again, information about the reagent lots was of the utmost importance for understanding the EQA results and the feedback reports. It could also be valuable if the EQA provider asks for different characteristics of the participants in order to understand which factors are associated with good quality. Such factors could for example be the frequency of running internal quality control, the number of tests performed per week, the profession of the test operator, and the number of employees .
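To make the reasoning behind such feedback concrete, the sketch below shows one way a report could be assembled when participants submit their result together with instrument and reagent lot: each result is compared with an instrument-specific peer-group target, and lot-level summaries hint at whether a deviation is more likely caused by the participant, the instrument or the reagent lot. This is only an illustrative sketch; the participant data, the use of the peer-group median as target and the ±10% acceptance limit are assumptions for the example, not part of any actual scheme.

```python
# Hypothetical sketch of how an EQA provider might evaluate participant results
# against instrument-specific peer-group targets and summarize them by reagent lot.
# Data, field names and the +/-10 % acceptance limit are illustrative assumptions.
from statistics import median
from collections import defaultdict

results = [
    # participant_id, instrument, reagent_lot, reported_value (e.g. CRP, mg/L)
    ("GP-01", "InstrumentA", "LotX", 41.0),
    ("GP-02", "InstrumentA", "LotX", 43.5),
    ("GP-03", "InstrumentA", "LotY", 55.0),
    ("GP-04", "InstrumentB", "LotZ", 38.0),
    ("GP-05", "InstrumentB", "LotZ", 39.5),
    ("GP-06", "InstrumentA", "LotY", 54.0),
]

ACCEPTANCE_LIMIT = 0.10  # +/- 10 % around the peer-group target (assumed)

# Peer-group target: median of all results reported on the same instrument.
by_instrument = defaultdict(list)
for _, instrument, _, value in results:
    by_instrument[instrument].append(value)
targets = {instr: median(vals) for instr, vals in by_instrument.items()}

# Participant feedback: deviation from the peer-group target and a simple verdict.
for pid, instrument, lot, value in results:
    target = targets[instrument]
    deviation = (value - target) / target
    verdict = "acceptable" if abs(deviation) <= ACCEPTANCE_LIMIT else "deviating"
    print(f"{pid}: {value} vs target {target} ({deviation:+.1%}) -> {verdict} "
          f"[{instrument}, {lot}]")

# Lot summary: a consistent shift of one lot against the instrument target can
# point to a reagent-lot effect rather than poor participant performance.
by_lot = defaultdict(list)
for _, instrument, lot, value in results:
    by_lot[(instrument, lot)].append((value - targets[instrument]) / targets[instrument])
for (instrument, lot), devs in by_lot.items():
    print(f"{instrument}/{lot}: mean deviation {sum(devs)/len(devs):+.1%} (n={len(devs)})")
```

In a real scheme, the acceptance limit would follow the analytical performance specifications for the analyte, and the target would preferably come from a reference method or certified reference material when commutable control material is used.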
In general, EQA participants should have the opportunity to seek help and guidance when needed. For participants in primary health care, it is even more important to have someone they can contact when they have problems interpreting the feedback reports and, most importantly, deciding what actions should follow a deviant result. The EQA provider should have the responsibility to establish such a follow-up system. In a hospital, however, the central laboratory could have responsibility for the POC analyses on the wards and thus also for educating and guiding the POC users in EQA issues. Such a supervision system might be easier to organize within a hospital environment than in primary health care. The POC manufacturers are also responsible for having a system to educate, guide and follow up the POC users. In 1992, the Norwegian Quality Improvement of Primary Care Laboratories (Noklus) was established, and one prerequisite for starting this organisation was that there should be a system through which the users could get sound advice. Therefore, in addition to establishing an EQA organization, a system with more than 40 laboratory advisors sited all over Norway was established. Each laboratory advisor is responsible for about 50-100 units, e.g. GP offices, nursing homes, emergency healthcare centres, occupational healthcare centres, and oil platforms. 99.8% of all GP offices and 96% of all nursing homes in Norway participate in Noklus voluntarily. The main tasks of the laboratory advisors are to visit the participants, organize courses, offer advice concerning which instrument to buy, and follow up results from EQA schemes. In principle, these advisors contact all participants in Norway with poor performance. Since the choice of instrument is very important for the quality of patient results, the Scandinavian Evaluation of laboratory equipment for primary health care (SKUP) was established.
Whereas most attention from EQA providers has been focused on the analytical examination, more and more attention is being drawn to the pre- and post-examination processes. Many EQA providers are now circulating pre- and post-examination schemes, either alone or embedded in the analytical schemes. The post-examination schemes are especially important for POC testing since there is direct communication with the end-users of the tests. It is then possible to examine how the test results influence their clinical decisions and, indirectly, also to ascertain what performance they believe they have . By combining feedback on analytical quality with feedback on how the clinicians use the test, it is possible to generate strong educational material concerning the value of laboratory tests and how important it is that tests conform to given performance specifications. Noklus has run a series of case-history-based EQA surveys over the years, showing that clinicians have widely different knowledge of the analytical quality of the tests they are using.
A nominal scale deals with classification of a quantity irrespective of magnitude, e.g. type of virus, bacteria, or mutations, whereas an ordinal scale deals with all types of grading, e.g. urine strips for glucose or human chorionic gonadotrophin (hCG) . Generally, measurements performed on an ordinal scale are measurements that can also be performed on a ratio or interval scale. The quantities are often measured on an ordinal scale because a more rapid test result can be obtained and because people without much laboratory experience can perform such tests. Thus, these tests are commonly used as POC tests in, for example, GP offices, nursing homes and rural areas . EQA of the ordinal scale can be used for different types of POC tests, such as HIV, malaria and tuberculosis . EQA of such tests raises specific challenges because the users have more difficulty understanding the value of such EQA . In an EQA for POC testing on the ordinal scale, samples are typically circulated with concentrations that are expected, with a very high probability, to give “positive” or “negative” results. In addition, samples with an intermediate concentration are circulated. The participants will get an evaluation only with respect to the “positive” or “negative” samples, since they are supposed to classify these samples correctly . Samples with intermediate concentrations will give results that are expected to be both “positive” and “negative”. This information is useful to assess and monitor the performance of the POC tests, but not to assess user performance. Therefore, it is important to also circulate samples with intermediate concentrations, but this must be thoroughly explained to the users so that they can see a benefit from it.
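The scoring logic described above can be illustrated with a short sketch: only the samples expected to be clearly “positive” or “negative” count towards the participant evaluation, while results for the intermediate sample are tabulated separately to monitor test performance. The sample set, participant answers and scoring rule below are invented for the example and do not reflect any specific scheme.

```python
# Illustrative sketch of scoring an ordinal-scale (qualitative) EQA survey:
# only samples expected to be clearly "positive" or "negative" are scored, while
# results for the intermediate sample are only tabulated to monitor test
# performance. Sample names and results are invented for the example.
from collections import Counter

expected = {"S1": "negative", "S2": "positive", "S3": "intermediate"}

participant_results = {
    "GP-01": {"S1": "negative", "S2": "positive", "S3": "positive"},
    "GP-02": {"S1": "negative", "S2": "negative", "S3": "negative"},
    "GP-03": {"S1": "negative", "S2": "positive", "S3": "negative"},
}

intermediate_counter = Counter()
for pid, answers in participant_results.items():
    scored, correct = 0, 0
    for sample, answer in answers.items():
        if expected[sample] == "intermediate":
            intermediate_counter[answer] += 1  # monitored, not scored
            continue
        scored += 1
        correct += (answer == expected[sample])
    print(f"{pid}: {correct}/{scored} scorable samples correct")

print("Intermediate sample result distribution:", dict(intermediate_counter))
```

The distribution of results for the intermediate sample can then be reported back to the participants together with an explanation of why their answer to that particular sample was not scored.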
EQA for POC testing is in principle similar to EQA for larger hospital laboratories, but there are some important differences. The participants are often the end-users of the tests (e.g. clinicians and nurses), they usually have little or no knowledge of laboratory medicine, and the number of participants is often high. This gives the EQA providers some extra challenges; they must convince the participants that participation in EQA schemes is important, be able to circulate materials at reasonable time intervals, produce feedback reports that are understandable by the participants, and offer help and guidance to the participants when needed. It is also important that EQA for POC testing addresses the total examination process, and that schemes for measurement procedures using an interval or ordinal scale are offered.
Supplementation with Chinese herbal preparations protect the gut-liver axis of Hu sheep, promotes gut-liver circulation, regulates intestinal flora and immunity | 455e86bf-f88e-497e-915b-2b50bd90ebc5 | 11599181 | Biochemistry[mh] | In the wake of the complete prohibition of antibiotics, the utilization of Chinese herbal preparations has emerged as a promising alternative in the advancement of animal husbandry. The increasing use of herbal medicines is due in part to the growing problem of antibiotic misuse and resistance. While antibiotics are very effective in treating bacterial infections, prolonged or inappropriate use can lead to the development of bacterial resistance, which reduces the effectiveness of antibiotics, as well as causing adverse effects on livestock ( ). In contrast, Chinese herbal medicines are generally considered to have a lower risk of toxicity and drug resistance, and to be of natural origin, versatile, safe, reliable, economical and environmentally friendly. The effectiveness of Chinese herbal medicines in treating certain diseases is comparable to that of antibiotics, and some specific Chinese medicine components have been shown to possess certain antibacterial properties. For example, Atractylodes macrocephala, saponins, Scutellaria baicalensis, and Radix et Rhizoma Ginseng have shown antibacterial, antiviral, anti-inflammatory, and immune-function-enhancing effects ( ).With the development of biotechnology, especially the rise of microbiome and metabolomics, more and more studies have shown that intestinal microorganisms and intestinal metabolites can be linked to the organs and tissues of the animal body, which are called “axis”, such as: gut-testis axis, gut-liver axis, and gut-cardiac axis. The gut-liver axis, which governs the nutritional metabolism of animals by means of the interplay between intestinal microecology and the host, plays a crucial role in ensuring the sound growth of animals. The role of gut microbial communities in maintaining host health has attracted widespread attention ( ). The liver and intestine are connected through the portal vein, the biliary system, and mediators in the circulation (gut-liver axis). Microorganisms in the gut are involved in the maintenance of liver steady state ( ). Nutrients, microbial antigens, metabolites and bile acids regulate metabolic and immune responses in the gut and liver, thereby interacting with each other to influence the structure and function of microbial communities ( ). Metabolomics is the science of the study of all metabolites in living organisms, such as amino acids, fatty acids and carbohydrates, to quantitatively analyse and assess changes in metabolite levels and to reveal their relevance to physiological processes. Recently, metabolomics analysis based on ultra performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS) technology has been widely used in animal physiology and pathology ( ). However, the investigation of the gut-liver axis in ruminants remains relatively limited. Consequently, this study primarily employed metabolomics and 16s to examine the impact of Chinese herbal preparations on the intestinal and liver axis of Hu sheep, and This in turn revealed the role of Chinese herbal preparations in regulating nutrient metabolism, gut-liver circulation and immune function in Hu sheep. 
The group’s previous research showed that adding 0.5% and 1% Chinese herbal preparations to the concentrate supplement can increase the content of IgM, IgA and lysosomal enzymes in the serum of Hu sheep and improve small intestinal villus length and the villus:crypt ratio, which indicates that the Chinese herbal preparations can improve the immunity level of the organism, significantly promote the development of small intestinal tissue morphology, and improve the absorption and utilization of nutrients. The mechanism of the gut-liver axis is illustrated in . In animals, the intestine and the liver are connected through the portal vein, and about two-thirds of the liver’s blood comes from the intestine, so the liver is the first organ to receive the small-molecule nutrients (e.g., small-molecule proteins, peptides, amino acids, short-chain fatty acids, etc.) absorbed in the intestine. The liver in turn secretes bile acids into the intestine, most of which (more than 95%) are reabsorbed in the ileum, returned to the liver through the portal vein, and secreted again into the bile ducts, forming the enterohepatic circulation. This bidirectional enterohepatic connection is the bridge for communication along the gut-intestinal microbiota-liver axis, which plays an important role in regulating nutrient metabolism and immunity in animals. Gut microbes regulate animal growth and metabolism through signals (e.g., short-chain fatty acids, bile acids, methylamines, amino acid derivatives, and microbe-associated molecular patterns, MAMPs), which are sensed by the host through Toll-like receptors, free fatty acid receptors, and bile acid receptors to regulate nutritional metabolism and immunity.
Ethics statement
The animal committee of Gansu Agricultural University approved all animal management and experiments (ethical approval number: GSAU-AEW-2020-0057).
Chinese herbal preparations
The Chinese herbal preparations are mainly composed of Codonopsis pilosulae, Medicated leaven, Malt, Hawthorn (fried), Astragalus membranaceus, Poria cocos, Atractylodes, etc. The colorimetric method was used to determine the effective ingredient content of the Chinese herbal preparations, which were made into bulk formulations and fed to Hu sheep at 0.5% and 1% of the concentrate supplement, respectively. By examining linearity, precision, stability, reproducibility and spiked recovery, we showed that the three established methods for determining the effective active ingredients were reliable: the contents of total polysaccharides, total flavonoids and total saponins in the herbal preparations were 27.21%, 0.03% and 0.12%, respectively, with total polysaccharides being the most abundant. The nutritional levels and active ingredient contents of the Chinese herbal preparations are shown in , and the diet composition and nutrient content are shown in .
Test animals
The trial was conducted from March to July 2023 in Gulang County, Wuwei City, Gansu Province. Eighteen healthy male Hu sheep with an average initial body weight of 19.57 ± 1.56 kg were selected as test animals and randomly divided into 3 groups of 6 animals each: a control group (Con, fed the basal diet), test group I (T1, fed a diet in which the concentrate supplement contained 0.5% Chinese herbal preparation), and test group II (T2, fed a diet in which the concentrate supplement contained 1% Chinese herbal preparation). The pre-test period was 7 d. (The trial was conducted under the same housing, lighting, and ventilation conditions, with the sheep fed ad libitum in a loose feeding system. A 10-day acclimation period was followed by a 90-day experimental period, with feeding starting at 7:30 am each day. The concentrated supplement was mixed with the feed for the first 10 days, and the proportion of concentrated supplement containing 0.5% or 1% of the herbal preparation was then increased gradually: 1/4 of the concentrated supplement was added on days 1-2, 1/3 on days 3-4, 2/3 on days 5-6, and the full amount on day 7. During the trial, the sheep had free access to water to ensure that the feeding method was consistent across all groups.) The formal experimental period was 90 d. To study the metabolic changes induced by the Chinese herbal preparations along the gut-liver axis of Hu sheep, we performed extensive metabolic analyses of ileum and liver metabolites using a broadly targeted UPLC-MS/MS metabolomics approach together with 16S sequencing of ileal microorganisms.
Sample preparation
Liver tissue and ileal content samples were collected rapidly from the test sheep after slaughter, immediately placed in liquid nitrogen and returned to the laboratory within 1 h. One portion was stored at -80°C for total RNA and protein extraction, and another portion was fixed in 4% paraformaldehyde solution and then subjected to gradient alcohol dehydration, xylene clearing, paraffin embedding and serial sectioning at 4 μm thickness. For metabolomics, liver and gut samples were removed from the -80°C freezer, thawed on ice, chopped and mixed. Each sample was accurately weighed (20 mg) using multipoint sampling, transferred to a centrifuge tube and homogenized with a steel ball (30 Hz, 20 s).
Following centrifugation (3000 rpm, 4°C, 30 s), 400 μL of 70% methanol-water internal standard extractant was added to the pelleted homogenate, which was shaken well (1500 rpm) for 5 min and then centrifuged (12,000 rpm, 4°C, 10 min). A 300 μL aliquot of the supernatant was transferred to a new centrifuge tube, left to stand for 30 min at 20°C, and centrifuged again (12,000 rpm, 4°C) for 3 min before the supernatant was collected for analysis.
Ileal immunity indexes
Ileal immunity indexes included immunoglobulin A (IgA), immunoglobulin G (IgG), immunoglobulin M (IgM), complement 3 (C3) and complement 4 (C4) content. These indicators were determined using kits purchased from Beijing Huaying Biotechnology Co.
Chromatography and mass spectrometry acquisition conditions
The data acquisition instrument system consisted of ultra-high performance liquid chromatography (UPLC) (ExionLC AD, https://sciex.com.cn/ ) coupled to tandem mass spectrometry (MS/MS) (QTRAP®, https://sciex.com.cn/ ). T3 method chromatographic conditions: column, Waters ACQUITY UPLC HSS T3 C18 (1.8 μm, 2.1 mm × 100 mm); mobile phase, ultrapure water with 0.1% formic acid (phase A) and acetonitrile with 0.1% formic acid (phase B); elution gradient, 0 min water/acetonitrile (95:5, V/V), 11.0 min (10:90, V/V), 12.0 min (10:90, V/V), 12.1 min; flow rate, 0.4 mL/min; column temperature, 40°C; injection volume, 2 μL. Mass spectrometry conditions: electrospray ionization (ESI) source temperature 500°C; ion spray voltage 5500 V (positive) and -4500 V (negative); ion source gas I (GS I) 55 psi; gas II (GS II) 60 psi; curtain gas (CUR) 25 psi; the collision-activated dissociation (CAD) parameter was set to high. In the triple quadrupole (QTRAP), each ion pair was scanned and detected based on the optimized declustering potential (DP) and collision energy (CE).
Metabolomics analysis
Raw UPLC-MS/MS data were imported into Analyst 1.6.3 software. Metabolites were analyzed qualitatively and quantitatively for each sample using mass spectrometry (MS) based on a local metabolite database. Data were log2-transformed to improve normality and were standardized. Hierarchical cluster analysis (HCA) and orthogonal partial least squares discriminant analysis (OPLS-DA) were performed using R software. Principal component analysis (PCA) was used to simplify and reduce the dimensionality of the high-dimensional data. Based on the OPLS-DA analysis, the criteria for screening differential metabolites were a fold change (FC) ≥2 or ≤0.5 and VIP ≥1. Relationships between differential metabolites were demonstrated by Venn diagrams. Metabolite annotation and pathway enrichment analyses were performed using the KEGG Compound and KEGG Pathway databases.
Intestinal flora analysis
Total microbial DNA was extracted from the samples using a kit. DNA quality was checked by 0.8% agarose gel electrophoresis, and DNA was quantified with a UV spectrophotometer. Specific variable regions of the 16S rRNA gene, which is present in the genomes of all bacteria and is highly conserved, were amplified by PCR using universal primers. The PCR products were purified and libraries were constructed for sequencing on the Illumina MiSeq high-throughput sequencing platform. Sequencing data were processed and analyzed, including sequence quality control, OTU clustering, species annotation, and diversity analysis.
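The differential-metabolite screening step described in the metabolomics analysis above (FC ≥2 or ≤0.5 and VIP ≥1) can be summarized in a minimal sketch. The metabolite names, intensities and VIP values below are invented placeholders; in the actual workflow, VIP scores come from the fitted OPLS-DA model and fold changes from the quantified peak intensities.

```python
# Minimal sketch of the differential-metabolite screening criteria described above
# (fold change >= 2 or <= 0.5 and VIP >= 1). Intensities and VIP values are invented;
# in the actual workflow VIP scores are taken from the OPLS-DA model.
import math

metabolites = {
    #  name              Con mean   T2 mean   VIP (from OPLS-DA)
    "taurocholic acid": (1200.0,    3100.0,   1.8),
    "L-glutamine":      (850.0,      900.0,   0.6),
    "butyric acid":     (400.0,      150.0,   1.3),
}

def is_differential(con_mean, trt_mean, vip, fc_up=2.0, fc_down=0.5, vip_cut=1.0):
    """Apply the FC and VIP thresholds used for screening differential metabolites."""
    fc = trt_mean / con_mean
    return vip >= vip_cut and (fc >= fc_up or fc <= fc_down), fc

for name, (con, trt, vip) in metabolites.items():
    hit, fc = is_differential(con, trt, vip)
    status = "differential" if hit else "not significant"
    print(f"{name}: FC={fc:.2f} (log2FC={math.log2(fc):+.2f}), VIP={vip:.1f} -> {status}")
```

Metabolites passing both thresholds would then be carried forward to the Venn-diagram comparisons and KEGG pathway enrichment described above.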
Intestinal flora analysis

Total microbial DNA was extracted from the samples using a commercial kit. DNA quality was checked by 0.8% agarose gel electrophoresis, and DNA was quantified with a UV spectrophotometer. Specific variable regions of the 16S rRNA gene, which is present in the genomes of all bacteria and is highly conserved, were amplified by PCR with universal primers. The PCR products were purified, libraries were constructed, and sequencing was performed on the Illumina MiSeq high-throughput platform. Sequencing data were processed and analyzed, including sequence quality control, OTU clustering, species annotation, and diversity analysis. Common downstream analyses included principal coordinate analysis (PCoA), non-metric multidimensional scaling (NMDS), and linear discriminant analysis effect size (LEfSe). In the microbial composition analysis, we focused on the top 10 taxa at the phylum and genus levels and examined the composition at each taxonomic level.

Experimental data were analyzed by one-way analysis of variance (ANOVA) in SPSS 26.0, followed by Duncan's multiple comparison test; results are expressed as mean ± standard deviation, with P < 0.05 considered significant. Statistical analyses and graphing of differential taxa at the phylum and genus levels in the ileum were performed with GraphPad Prism 5.0.
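The statistics described above were performed in SPSS 26.0. As an illustrative R analogue only, the sketch below runs a one-way ANOVA followed by Duncan's multiple range test on simulated data using the agricolae package; the package choice and all values are assumptions for the example, not part of the original analysis.

```r
# Illustrative one-way ANOVA + Duncan's multiple range test on simulated data.
library(agricolae)

set.seed(2)
dat <- data.frame(group = factor(rep(c("Con", "T1", "T2"), each = 6)),
                  IgA   = c(rnorm(6, 1.0, 0.1), rnorm(6, 1.3, 0.1), rnorm(6, 1.1, 0.1)))

fit <- aov(IgA ~ group, data = dat)
summary(fit)                                   # overall ANOVA; P < 0.05 indicates a group effect
duncan.test(fit, "group", console = TRUE)      # Duncan grouping letters, as used for multiple comparisons

# Means +/- SD per group, matching the "mean +/- standard deviation" reporting
aggregate(IgA ~ group, data = dat, function(x) c(mean = mean(x), sd = sd(x)))
```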
Liver tissue section results

Liver sections from Hu sheep fed the herbal feed additive were examined ( ). In the control group, liver morphology was relatively regular, hepatic cords were neatly arranged, and only a small amount of collagen fiber was seen in the portal areas. In the T1 and T2 groups, there was collagen fiber hyperplasia in the portal areas with formation of fibrous septa, but the difference from the control was not obvious. Compared with the control group, lymphocytes were slightly increased and there was some connective tissue hyperplasia in T1 and T2, again without an obvious difference. These findings indicate that the herbal preparation caused no obvious lesions or abnormalities in the liver of Hu sheep, suggesting that the herbal feed additive can be used for healthy Hu sheep production with only minor effects on liver tissue.

Effects of herbal feed additives on ileal immunity indexes of Hu sheep

As shown in , the serum IgA, IgG, and complement C3 contents of test group I were significantly higher than those of the control group (P < 0.05), whereas there were no significant differences in serum C4 and IgM contents between the groups (P > 0.05).

UPLC-MS/MS-based qualitative and quantitative analyses of metabolites

To investigate the metabolic alterations occurring in the gut-liver axis under different proportions of the Chinese herbal preparation, we performed metabolome sequencing of the ileum. A total of 1,202 metabolites were detected in the ileum and classified into categories including 378 amino acids and their metabolites, 102 benzene and its derivatives, 51 alcohols and amines, 21 bile acids, 21 coenzymes and vitamins, 94 glycerophospholipids, 87 nucleotides and their metabolites, 31 hormones and hormone-related substances, 22 aldehydes, ketones, and esters, 51 carbohydrates and their metabolites, 164 organic acids and their derivatives, 71 heterocyclic compounds, and 89 fatty acyls. The main components of the Hu sheep ileum were amino acids and their metabolites and organic acids and their derivatives.

Differences in ileal metabolites under different ratios of the Chinese herbal preparation

OPLS-DA assesses the explanatory power and predictive ability of a model through R2Y and Q2. In this study, OPLS-DA was used to analyze the differences in ileal metabolites between groups and to examine the effect of the Chinese herbal preparation; clear metabolite differences and good group separation were found. For Con vs. T1, T1 vs. T2, and Con vs. T2, the R2Y values were 0.99, 0.997, and 0.971, respectively ( ), indicating that the OPLS-DA models were well fitted, highly predictive, and suitable for subsequent analysis. Based on the OPLS-DA results, differential metabolites were further screened and classified using FC ≥ 1.2, and the results are presented as volcano, Venn, heat, and k-means diagrams. To identify significant changes in ileal metabolites under the different proportions of the Chinese herbal preparation, we analyzed the differential metabolites for Con vs T1, T1 vs T2, and Con vs T2. Cluster graphs and heat maps showed clear clustering of the differential metabolites, and the trends are shown in .
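The OPLS-DA metrics quoted here (R2Y, Q2) and the VIP scores used for screening can be reproduced in R; the sketch below uses the Bioconductor package ropls on simulated data. The package choice, group sizes, and effect size are assumptions for illustration only and are not details taken from the study.

```r
# Hypothetical OPLS-DA fit on simulated ileal metabolite intensities.
# The fitted object prints R2X, R2Y and Q2; getVipVn() returns VIP scores.
library(ropls)

set.seed(4)
n_per_group <- 6
n_metab     <- 100
X <- rbind(matrix(rnorm(n_per_group * n_metab, mean = 0),   n_per_group),
           matrix(rnorm(n_per_group * n_metab, mean = 0.5), n_per_group))
rownames(X) <- paste0(rep(c("Con", "T1"), each = n_per_group), 1:n_per_group)
colnames(X) <- paste0("M", seq_len(n_metab))
group <- factor(rep(c("Con", "T1"), each = n_per_group))

oplsda <- opls(X, group, predI = 1, orthoI = 1)   # 1 predictive + 1 orthogonal component
oplsda                                            # prints R2X(cum), R2Y(cum), Q2(cum)
vip <- getVipVn(oplsda)
head(sort(vip, decreasing = TRUE))                # metabolites with the highest VIP
```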
Based on the Venn diagrams for the different comparisons in , eight differential metabolites were shared by all three comparisons, belonging to organic acids and their derivatives, benzene and its derivatives, nucleotides and their metabolites, bile acids, amino acids and their metabolites, and hormones and hormone-related substances ( ). For example, 2-hydroxyphenylacetic acid is an organic compound with antioxidant properties that can act on β2 receptors on intestinal smooth muscle cells. beta-Muricholic acid is a bile acid that also plays an important role in the gut: it promotes the production and clearance of bile and can stimulate bile secretion and excretion by acting on G protein-coupled receptors on the surface of hepatocytes, thereby maintaining normal bile circulation. L-Lysine-L-alanine is one of the components involved in glutathione synthesis; glutathione is the main intracellular antioxidant, able to scavenge free radicals and peroxides, and it also acts as a metabolic substrate for intestinal microorganisms, influencing the composition and function of the intestinal microbiota and, in turn, the antioxidant capacity of the intestine. 6-Methylaminopurine is an antimetabolite that also acts in the intestinal tract, where it can inhibit the growth and multiplication of gut bacteria. A total of 120 significantly different metabolites were identified for Con vs T1, comprising 31 up-regulated and 89 down-regulated metabolites ( ). For Con vs T2, 263 significantly different metabolites were identified, comprising 134 up-regulated and 129 down-regulated metabolites ( ), and for T1 vs T2, 135 significantly different metabolites were identified, comprising 91 up-regulated and 44 down-regulated metabolites ( ). The Con vs T2 and T1 vs T2 comparisons included numerous up-regulated metabolites, suggesting that a large number of metabolites accumulated in the ileum after the addition of the Chinese herbal preparation, most of them up-regulated.

Metabolic profile of ileal metabolites under different proportions of the Chinese herbal preparation

To further investigate the changing patterns of the differential metabolites in the ileum supplemented with different proportions of the Chinese herbal preparation, six classes of differential metabolites were analyzed by heat mapping ( ). Most bile acids, organic acids and their derivatives, heterocyclic compounds, and hormones and related substances showed a decreasing trend, whereas amino acids and their metabolites and carbohydrates showed an increasing trend.

Metabolic pathways of ileal metabolites under different ratios of the Chinese herbal preparation

To explore the potential metabolic pathways affected in the ileum by the different proportions of the Chinese herbal preparation, KEGG classifications and bubble diagrams were generated from the differential metabolites ( ) to identify potential biomarkers relevant to gut-liver axis nutrient metabolism, antioxidant capacity, and immunity. Differential metabolites were screened and annotated, KEGG pathways containing more than five differential metabolites were selected, and cluster analyses were performed to investigate changes in metabolic patterns at the different ratios.
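KEGG bubble diagrams of this kind are typically built on an over-representation (hypergeometric) test. The R sketch below illustrates that logic; the background size, pathway sizes, and hit counts are invented for the example and do not reproduce the study's enrichment results.

```r
# Hypergeometric over-representation test for KEGG pathways (illustrative values).
total_annotated <- 800    # metabolites with a KEGG annotation (background)
total_diff      <- 120    # differential metabolites (e.g., Con vs T1)

pathways <- data.frame(
  pathway    = c("Bile secretion", "Glutathione metabolism", "Tryptophan metabolism"),
  in_pathway = c(40, 25, 30),     # annotated metabolites in each pathway
  diff_hits  = c(12, 6, 9)        # differential metabolites falling in the pathway
)

# P(X >= diff_hits) under the hypergeometric distribution
pathways$p_value <- with(pathways,
  phyper(diff_hits - 1, total_diff, total_annotated - total_diff, in_pathway,
         lower.tail = FALSE))
pathways$p_adj <- p.adjust(pathways$p_value, method = "BH")
pathways
```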
The Con and T1 groups ( ) differed in steroid hormone biosynthesis, as well as in tryptophan, phenylalanine, and tyrosine metabolism, bile secretion, vitamin digestion and absorption, amino sugar and nucleotide sugar metabolism, and glutathione and arachidonic acid metabolism. The Con and T2 groups ( ) differed mainly in phenylalanine and tryptophan metabolism, steroid biosynthesis, bile secretion, arachidonic acid metabolism, primary bile acid biosynthesis, and vitamin digestion and absorption. For T1 vs T2 ( ), the predominant pathways were steroid metabolism, bile acid and primary bile acid secretion, cholesterol metabolism, vitamin digestion and absorption, and arachidonic acid metabolism. Pathway interactions were then analyzed for the three sets of differential metabolites in the ileum using the top 25 differential metabolites ( ). Fatty acyl metabolites (LXB4, carnitine C10:2) were closely associated with the amino acids and their metabolites Ile-Val and cyclo(Pro-Pro) and with the organic acid derivative SDMA, and beta-muricholic acid was closely associated with tridecanedioic acid and tetradecanedioic acid. These metabolites act synergistically to support the ileal intestinal barrier and the absorption and metabolism of nutrients.

Qualitative and quantitative analysis of liver metabolites based on UPLC-MS/MS

To study the metabolic changes in the liver under different proportions of the herbal preparation, we performed liver metabolome sequencing and identified a total of 1,059 metabolites, which were classified into different categories: 333 amino acids and their metabolites, 81 benzene and its derivatives, 45 alcohols and amines, 12 bile acids, 15 coenzymes and vitamins, 95 glycerophospholipids and glycerolipids, 77 nucleotides and their metabolites, 20 hormones and hormone-related substances, 18 aldehydes, ketones, and esters, 4 tryptophan-, choline-, and pigment-related substances, 52 carbohydrates and their metabolites, 155 organic acids and their derivatives, 62 heterocyclic compounds, and 76 fatty acyls. The main components of the Hu sheep liver were amino acids and their metabolites and organic acids and their derivatives.

Differences in liver metabolites under different proportions of the Chinese herbal preparation

The OPLS-DA analysis gave R2Y values of 0.998, 0.998, and 0.998 for Con vs T1, T1 vs T2, and Con vs T2, respectively ( ), indicating well-fitted, predictive models appropriate for subsequent analysis. Based on the OPLS-DA results, differential metabolites were further screened and classified using FC ≥ 1.2, and the results are presented as volcano, Venn, heat, and k-means diagrams. To investigate the significant changes in liver metabolites under the different ratios of the Chinese herbal preparation, we analyzed the differential metabolites for Con vs T1, T1 vs T2, and Con vs T2. A total of 176 significantly different metabolites were identified for Con vs T1, including 30 up-regulated and 146 down-regulated metabolites, and 55 significantly different metabolites were identified for T1 vs T2, including 35 up-regulated and 20 down-regulated metabolites.
For Con vs T2, a total of 168 significantly different metabolites were identified, including 55 up-regulated and 113 down-regulated metabolites. The T1 vs T2 comparison showed many up-regulated metabolites, suggesting that a large number of metabolites accumulated in the liver after the addition of the Chinese herbal preparation, most of them up-regulated. The heat maps of the differential metabolites for the three comparisons also clearly show these trends; cluster plots and heat maps showed clear clustering of metabolite variation, as presented in ( ). Based on the Venn diagrams of the different comparisons, a total of 191 differential metabolites were identified, confirming the dynamic changes in liver metabolites induced by the herbal preparation. In addition, six overlapping differential metabolites, belonging to classes such as organic acids and their derivatives, heterocyclic compounds, bile acids, and amino acids and their metabolites, were identified.

Metabolic characteristics of liver metabolites under different proportions of the Chinese herbal preparation

To further investigate the changing pattern of the differential metabolites, heat map analysis of eight classes of differential metabolites was carried out for livers supplemented with different proportions of the Chinese herbal preparation ( ). Most of the amino acids and their metabolites, benzene and its derivatives, bile acids, organic acids and their derivatives, and carbohydrates and related substances showed a decreasing trend.

Metabolic pathways of liver metabolites under different proportions of the Chinese herbal preparation

Differential metabolites identified by the screening criteria were annotated with KEGG information to investigate the potential metabolic pathways altered in the liver at the different proportions ( ). The major metabolic pathways for the Con vs T1 differential metabolites were arachidonic acid metabolism, vitamin digestion and absorption, glutathione metabolism, primary bile acid biosynthesis, and bile secretion. For Con vs T2, the main pathways were arachidonic acid metabolism, glycolysis, bile secretion, vitamin digestion and absorption, cholesterol metabolism, glutathione metabolism, protein digestion and absorption, vitamin B6 metabolism, and primary bile acid biosynthesis. The major pathways for T1 vs T2 included glycolysis, bile secretion, glutathione metabolism, retrograde endocannabinoid signaling, protein digestion and absorption, and vitamin digestion and absorption. Overall, the major metabolic pathways associated with the differential metabolites mainly involve energy metabolism and bile acid metabolism, together with gut-liver axis immunity, circulation, and antioxidant effects. L-Isoleucine is an important raw material in the liver, where it promotes the synthesis of albumin, globulin, and other important proteins by hepatocytes. Subsequently, pathway interactions were analyzed for the three sets of differential metabolites; the interaction maps for the Con vs T1, Con vs T2, and T1 vs T2 comparisons are shown in , and the top 25 differential metabolites were selected to plot the interactions .
The fatty acyl metabolites (±)5-HETE and 5,6-DiHETrE were located at the center of the network and were closely associated with other metabolites, including the amino acid derivatives N-methylalanine and Gly-Ile, the aldehyde/ketone metabolite N-ethylglycine, and the organic acid derivative 2-methylcitric acid. Together, these metabolites promote hepatic antioxidant and anti-inflammatory activity as well as nutrient uptake and substance metabolism.

Ileum microorganisms

To further investigate whether the Chinese herbal preparation affects the intestinal flora, we analyzed the composition of the ileal flora by 16S rRNA gene sequencing. The microbial composition at the phylum and genus levels is shown in ( ). At the phylum level, Firmicutes, Actinobacteriota, Bacteroidota, Proteobacteria, and Verrucomicrobiota showed an increasing trend compared with Con. At the genus level, Christensenellaceae_R-7_group, Eubacterium_hallii_group, Lachnospiraceae_NK3A20_group, Arthrobacter, and NK4A214_group showed an increasing trend compared with Con.

Correlation analysis

Bile acids (BAs) are the final products of hepatic cholesterol metabolism; they are synthesized in the liver, transported as conjugated bile acids across the canalicular membranes of hepatocytes into the biliary system, and ultimately delivered into the small intestine ( ). The heat map of the correlations between ileal and hepatic bile acids is shown in ( ). For example, 3-epideoxycholic acid in the ileum was significantly negatively correlated with isodeoxycholic acid in the liver (P < 0.05), whereas β-deursolic acid in the ileum was significantly positively correlated with 3-epideoxycholic acid in the liver (P < 0.05). Based on ( ), gamma-muricholic acid was positively correlated with Firmicutes in the ileum; gamma-muricholic acid, cholic acid, and beta-muricholic acid were the bile acids most relevant to Bacteroidota; and hyodeoxycholic acid, isolithocholic acid, lithocholic acid, and ursodeoxycholic acid were positively correlated with Actinobacteriota. According to ( ), the liver bile acids correlated with Firmicutes were gamma-muricholic acid and isochenodeoxycholic acid, both positively; the bile acids relevant to Bacteroidota were gamma-muricholic acid, cholic acid, and isochenodeoxycholic acid; and the bile acids correlated with Actinobacteriota were gamma-muricholic acid, cholic acid, and isochenodeoxycholic acid, with a positive trend.
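The correlation heat maps between bile acids and microbial phyla described above reduce to pairwise correlation tests across animals. The R sketch below illustrates this with Spearman correlations on simulated data; all variable names and values are invented for the example, and the correlation method actually used in the study is not specified in the text.

```r
# Illustrative Spearman correlations between bile acid levels and phylum-level
# relative abundances across 18 animals (all values simulated).
set.seed(3)
n_sheep <- 18
bile_acids <- data.frame(Cholic_acid     = rlnorm(n_sheep),
                         beta_Muricholic = rlnorm(n_sheep),
                         Ursodeoxycholic = rlnorm(n_sheep))
phyla      <- data.frame(Firmicutes       = runif(n_sheep, 0.4, 0.8),
                         Bacteroidota     = runif(n_sheep, 0.1, 0.4),
                         Actinobacteriota = runif(n_sheep, 0.01, 0.1))

pairs <- expand.grid(bile_acid = names(bile_acids), phylum = names(phyla),
                     stringsAsFactors = FALSE)
stats <- t(mapply(function(b, p) {
  ct <- cor.test(bile_acids[[b]], phyla[[p]], method = "spearman")
  c(rho = unname(ct$estimate), p = ct$p.value)
}, pairs$bile_acid, pairs$phylum))
cbind(pairs, stats)   # pairs with p < 0.05 would be highlighted on the heat map
```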
The ileum plays a crucial role in the gut-liver axis of ruminants, particularly in the absorption of amino acids. Short-chain fatty acids (SCFAs) are crucial metabolites produced by gut microorganisms, and several studies have demonstrated their significant role in maintaining host health and in disease. SCFAs can regulate the immune system and enhance the body's resistance to pathogenic microorganisms, and they promote the absorption of nutrients, particularly fat and protein, in the small intestine ( ). SCFAs such as butyric acid and propionic acid serve as a source of nutrients and energy for the intestinal epithelium; propionic acid also acts as a precursor for lipogenesis and gluconeogenesis, and maintaining appropriate levels of butyric acid in the gut is crucial for preserving intestinal integrity and permeability ( ). Several studies have demonstrated that dietary addition of astragalus polysaccharides significantly increases the levels of acetic acid, propionic acid, butyric acid, taurine, and bile acids in the ileum, thereby regulating intestinal immune function ( ). In this experiment, the addition of 0.5% Chinese herbal preparation increased the short-chain fatty acid content (propionic acid and butyric acid) in the ileum, which could enhance ileal immunity. Bile is an essential liver secretion. Bile acids, the primary organic solutes of bile, are synthesized from cholesterol in the liver and stored in the gallbladder; after secretion into the intestine, they are reabsorbed in the distal ileum ( ). Cholic acid (CA) and chenodeoxycholic acid (CDCA) are produced from cholesterol in the liver through a series of enzymatic reactions and are conjugated with glycine to form conjugated bile acids, which are released into the bile; BAs are then secreted through the gallbladder into the intestine ( ). Several studies have demonstrated that bile acids enhance nutrient absorption and regulate the innate immune system of the gut and liver ( ). Bile acids are also important in the ileum, where they not only promote fat digestion and absorption but also have physiological functions such as regulating cholesterol metabolism and maintaining the intestinal microecological balance, and they can significantly increase the abundance of probiotic species. For example, Bifidobacterium bifidum promotes lipid metabolism via the fatty acid-liver peroxisome proliferator-activated receptor alpha (PPAR-α) signaling pathway, which in turn up-regulates hepatic FXR ( ). One study demonstrated that CDCA supplementation in mice effectively restored LPS-induced intestinal permeability and mitigated damage to the intestinal barrier ( ). Our results showed that the levels of glycochenodeoxycholic acid, deoxycholic acid, γ-muricholic acid, murideoxycholic acid, and 1-aziridineethanol were significantly higher in T1 than in Con and T2, suggesting that adding 0.5% of the Chinese herbal preparation to the diet of Hu sheep enhances the assimilation of bile acids and nutrients in the ileum and modulates the immune system. The metabolites derived from tryptophan, including indole-3-acetic acid, indole-3-aldehyde, and indole-3-acrylic acid, contribute to the maintenance of intestinal immune homeostasis and exert a positive influence on both adaptive and innate immune responses in the ileum.
These findings highlight the beneficial impact of these tryptophan metabolites on the host's immune system ( ). Indoles such as 5-hydroxyindolepyruvic acid have been identified as potential biomarkers for diarrhea, and in animal studies dietary supplementation with Ginseng and Atractylodis Macrocephalae has been shown to improve the microenvironment of the intestinal flora and prevent diarrhoea ( ). In the present study, the addition of the Chinese herbal preparation to the ration increased the level of 5-hydroxyindole pyruvate in the ileum of Hu sheep, which may help to prevent diarrhoea. Additionally, indole-3-lactic acid has been found to improve intestinal epithelial cell damage, promote the proliferation of intestinal stem cells, and mitigate oxidative stress in the intestinal epithelium ( ). In this experiment, indole-3-lactic acid levels were significantly higher in the test group than in the control group, suggesting that the herbal preparation contributed to the enhancement of the ileal barrier function in Hu sheep. The liver is central to the gut-liver axis: the intestine is connected to the liver via the portal vein, and about two thirds of the hepatic blood supply comes from the intestine, so the intestinal microbiota is a key axis affecting gut-liver pathophysiology ( ). SCFAs are fermentation products of intestinal microorganisms; at high concentrations they affect immunity through the intestine and portal vein and have a significant effect on gut-liver immunity. Propionic acid is a precursor of hepatic gluconeogenesis, butyric acid is partly used for hepatic ATP metabolism and activates gluconeogenesis, and SCFAs regulate gut metabolic proteins ( ). β-Alanine also has an antioxidant effect, reducing oxidative stress caused by free radicals and protecting against cell and tissue damage; it is a precursor of coenzyme A, promotes hepatocyte proliferation and repair, and enhances hepatic metabolism and detoxification ( ). In this experiment, the addition of 0.5% Chinese herbal preparation significantly increased the β-alanine content, thereby promoting liver metabolism and improving hepatic antioxidant function. Amino acids such as histidine, cysteine, and methionine have antioxidant properties owing to the sulphur or amino groups they contain and can be directly or indirectly involved in antioxidant processes. They provide energy and raw materials for the synthesis of other important substances, and they can be converted into the corresponding amines by decarboxylation, a process involved in the synthesis of neurotransmitters and hormones as well as in the release of energy. For example, glycine and glutamic acid are components of glutathione, the main antioxidant in the liver ( ). Glutamine is a precursor of glutathione, which helps cells resist oxidation and scavenge free radicals; it enhances immune responses, reduces inflammation and cellular damage, and modulates antioxidant and anti-inflammatory pathways ( ). Glutathione-mediated biotransformation in the liver is a well-known detoxification process for the elimination of small xenobiotics ( ). GSH is involved in numerous cellular processes, such as protein folding, safeguarding protein thiols from oxidation and cross-linking, degradation of proteins containing disulfide bonds, regulation of the cell cycle and proliferation, ascorbic acid metabolism, apoptosis, and ferroptosis ( ).
In this experiment, oxidized and reduced glutathione were significantly higher in the liver of the test groups than in the control group, indicating that the addition of the herbal preparation can improve the antioxidant function of the liver. Caffeic acid has a significant effect in the ileum and likewise in the liver: dietary addition of caffeic acid has been shown to significantly reduce nickel-induced lipid peroxidation and restore antioxidant defense levels in rat liver, demonstrating the physiological relevance of caffeic acid and its antioxidant effects in the body ( ). In this experiment, the addition of 0.5% Chinese herbal preparation increased the caffeic acid content in the liver, thereby enhancing hepatic antioxidant function. The importance of the gut microbiota for health is well recognized, and the liver is one of the key organs it affects. The influence of the gut flora on the liver is evident, but the liver can also influence the gut flora in a number of ways; this interaction constitutes the gut-liver axis ( ). It has been shown that a higher level of Firmicutes is associated with less inflammation. Acetylpropionic acid can be broken down by Firmicutes into short-chain fatty acids such as acetic acid and propionic acid, which can be absorbed into the bloodstream, pass through the portal vein to the liver, and finally be used for hepatic gluconeogenesis ( ). Studies have shown that dihydromyricetin (DMY), a natural flavonoid, may alleviate NASH by decreasing hepatotoxicity, modulating lipid metabolism, and increasing intestinal probiotics, and flavonoids in herbal medicine have been found to increase the abundance of the intestinal flora, protect the gastrointestinal tract and liver, and contribute to the body's antioxidant capacity. Christensenellaceae_R-7_group is associated with energy metabolism and inflammation, contributing to gut flora balance and nutrient absorption ( ). Lachnospiraceae_NK3A20_group is associated with compounds that have antioxidant and anti-inflammatory properties, and some studies have shown that a higher relative abundance of Lachnospiraceae_NK3A20_group may help reduce the incidence of diarrhoea in rabbits ( ). In this experiment, the addition of the Chinese herbal preparation significantly increased the abundance of Lachnospiraceae_NK3A20_group, suggesting that the Poria cocos in the herbal preparation can improve antioxidant capacity in Hu sheep. The Eubacterium hallii group, a component of the intestinal microbiota, produces butyric acid, a short-chain fatty acid that plays a vital role in regulating the intestinal flora, maintaining intestinal health, and providing beneficial effects on intestinal well-being ( ). The inclusion of 0.5% of the Chinese herbal preparation in the diet elevated the level of Eubacterium hallii group bacteria in the intestine, thereby exerting a protective influence on the intestinal tract. Several studies have demonstrated the efficacy of Seven-flavor Baizhu San, an herbal formulation, in treating diarrhoea of diverse etiologies; it exerts its antidiarrheal properties by modulating the composition of the intestinal microbiota, and the total glycosides present in the formulation exhibit a targeted influence on select intestinal bacteria and bile acids ( ).
Saponins are a large class of amphiphilic glycosides; their aglycones (sapogenins) are released by the intestinal flora and have a variety of biological activities, such as promoting absorption and exerting antioxidant, anti-inflammatory, and anticancer effects, which also helps to explain the in vivo bioactivity of saponins and their role in preventing several chronic diseases ( ). Our findings suggest that the addition of the herbal preparation increased the abundance of Firmicutes and related taxa, which in turn increased the abundance of γ-muricholic acid and other bile acids. Based on these observations, the polysaccharides and saponins in the Chinese herbal preparation appear able to ameliorate diarrhoea in Hu sheep. Furthermore, one study demonstrated that ursodeoxycholic acid effectively accelerates the frequency of bile acid recycling in the human body ( ). The synthesis of lithocholic acid, deoxycholic acid, and their derivatives by the intestinal flora serves as a mechanism to regulate signaling in immune cells, including dendritic cells, macrophages, Th1 cells, and Th17 cells, whose function is closely linked to immune and antioxidant responses ( ). In this study, the addition of the Chinese herbal preparation increased the abundance of Firmicutes, Bacteroidota, Actinobacteriota, and other phyla and, correspondingly, the contents of chenodeoxycholic acid, deoxycholic acid, and other bile acids. Taken together, these results indicate that the Chinese herbal preparation enhances nutrient digestion and antioxidant processes, promotes the diversity of the intestinal flora, and strengthens the immune function of Hu sheep.
In this study, we found that the addition of the Chinese herbal preparation resulted in higher contents of amino acids and their metabolites, organic acids and their derivatives, and bile acids, and in an increased abundance of gut-liver flora (Firmicutes and Bacteroidota) in the liver and ileum of Hu sheep. This suggests that the Chinese herbal preparation can promote the digestion, absorption, and metabolism of nutrients in the ileum and liver, improve the antioxidant and immune status of the organism, promote gut-liver circulation, regulate the abundance of the gut-liver flora, and increase resistance to diarrhoea in Hu sheep. The best effect was obtained at an inclusion rate of 0.5%.
A Radiologist’s Guide to IDH-Wildtype Glioblastoma for Efficient Communication With Clinicians: Part II–Essential Information on Post-Treatment Imaging | 2868b754-e964-48a5-b2a5-f99e11361f76 | 11955384 | Pathologic Processes[mh] | Isocitrate dehydrogenase (IDH)-wildtype glioblastoma, central nervous system (CNS) World Health Organization (WHO) grade 4, is a highly aggressive tumor with a median overall survival (OS) of less than 18 months despite standard treatment . Notwithstanding advances in various treatment options, tumor recurrence or progression is inevitable in IDH-wildtype glioblastomas, given their aggressive biological behavior. In clinical trials, a standardized determination of response assessment is essential for the identification of more effective therapies. Apart from clinical trials, in routine clinical practice, accurate interpretation of post-treatment imaging is crucial for assessing treatment response, changing treatment regimens in cases of tumor progression or recurrence, and predicting the prognosis in patients with IDH-wildtype glioblastoma. However, differentiating true tumor progression or recurrence and treatment response from treatment-related changes such as pseudoprogression (PsP), radiation necrosis, or pseudoresponse (PsR) presents a significant challenge. Clinical symptoms may not provide sufficient information to differentiate these conditions, as the mass effect from treatment-related changes can also cause worsening of neurological symptoms, similar to tumor progression or recurrence. The prior Part I Review summarized the histopathological concept of IDH-wildtype glioblastoma, and the information radiologists can provide for preoperative and immediate postoperative imaging. This subsequent Part II Review focuses on the information that radiologists can provide to clinicians based on post-treatment imaging findings. We present an overview of the clinical pathway of patients with IDH-wildtype glioblastoma, followed by a brief guide to the updated version of the Response Assessment in Neuro-Oncology (RANO) criteria (RANO 2.0) for clinical trials. Finally, we summarize the clinical and imaging findings of treatment-related changes, as well as true tumor progression or recurrence. As immunotherapy has been more frequently applied recently, we have provided a separate section on PsP in patients undergoing immunotherapy.
The standard of care for newly diagnosed patients includes maximal safe resection followed by concomitant radiotherapy and chemotherapy with temozolomide plus 6–12 cycles of adjuvant temozolomide for those aged <70 years who are in good general and neurological condition . Patients with unfavorable prognostic factors may undergo hypofractionated radiation therapy (RT) or chemotherapy alone according to their O 6 -methylguanine-DNA methyltransferase ( MGMT ) promoter methylation status . At recurrence, the standard of care is not well defined . Treatment is selected based on prior treatment, age, Karnofsky Performance Status, MGMT promoter methylation status, and patterns of disease progression . A recent multicenter study showed that re-resection may lead to improved survival outcomes when maximal safe resection with minimal residual contrast-enhancing tumor is achieved . The efficacy of re-irradiation remains debatable , whereas temozolomide rechallenge and lomustine are other options for alkylating chemotherapy. Bevacizumab, an antiangiogenic regimen, may prolong progression-free survival (PFS) but not OS in patients with recurrent tumors and has been the standard salvage therapeutic option for patients with recurrent tumors in the U.S. since its approval in 2009 by the FDA. In Europe, lomustine is the most commonly used second-line chemotherapy based on clinical trials showing a similar OS between patients with recurrence receiving lomustine plus bevacizumab and those receiving lomustine alone . Lomustine has never been shown to prolong post-progression survival, and other options, including regorafenib, are used depending on the individual center’s preference . Immunotherapy is a rapidly emerging treatment modality that includes various modalities such as vaccination therapy, oncolytic viral therapy, and immune checkpoint inhibitors such as anti-PD-1 antibody (nivolumab or pembrolizumab), anti-CTLA-4 antibody (ipilimumab), and chimeric antigen receptor (CAR) T cell therapy . The effect of immunotherapy on recurrence is currently being actively investigated, although its survival benefit has not yet been demonstrated . An overview of the clinical pathway for newly diagnosed and recurrent IDH-wildtype glioblastoma, combined with the timeline and imaging period, is summarized in .
Radiologists’ key role in imaging IDH-wildtype glioblastoma in the post-treatment setting is to correctly distinguish tumor progression or recurrence from treatment-related changes such as PsP, radiation necrosis, and PsR. Contrary to a preconception common among radiology trainees, the most important aspect of this process is not to focus solely on the current imaging findings but also to consider the underlying clinical context. In other words, radiologists should be familiar with the clinical background of each treatment-related change, such as its incidence, time window, and risk factors, to provide reliable imaging interpretations. Radiologists should also recognize the importance of reviewing the preoperative, immediate postoperative, and serial follow-up images; checking the surgical note and RT chart; and confirming the period of each treatment to provide an accurate diagnosis on the current post-treatment imaging. Active communication with clinicians is also crucial in difficult cases to draw a reasonable conclusion about the current imaging for patient care; we advise active discussion with neurosurgeons, neurologists, radiation oncologists, and pathologists.
In general, brain tissue is protected by the blood-brain barrier (BBB), which prevents contrast agent molecules from passing through . Contrast enhancement on MRI is a non-specific imaging finding that reflects the passage of gadolinium-based contrast agents across a disrupted BBB. While neoangiogenesis, one of the pathological hallmarks of IDH-wildtype glioblastoma, is the most important cause of contrast enhancement in these tumors, any other cause of vascular leakage can also produce contrast enhancement. Cytotoxic therapies and immunotherapies may not only damage tumor vessels and normal brain tissue but also trigger an inflammatory response in microglia, leading to pronounced BBB disruption. Therefore, a contrast-enhancing mass on post-treatment imaging may represent not only tumor recurrence but also PsP or radiation necrosis, which are notoriously difficult to differentiate from tumor recurrence on conventional imaging. In contrast, antiangiogenic therapy reduces the tumor vasculature and leads to restoration of the BBB . Therefore, contrast enhancement declines even in the presence of non-enhancing tumor growth, leading to PsR on MRI .
The recommended MRI protocol for adult gliomas in clinical trials includes 3D pre- and post-contrast T1-weighted imaging (T1), 2D post-contrast T2-weighted (T2) and pre-contrast fluid-attenuated inversion recovery (FLAIR) imaging, and 2D diffusion-weighted imaging . Perfusion imaging, such as dynamic susceptibility contrast imaging or arterial spin labeling, provides more detailed information on the underlying tumor physiology and should be routinely used for baseline and follow-up imaging in the clinical setting. MR spectroscopy provides additional information on tumor metabolism and has been applied in some institutions in confounding cases. However, advanced physiological imaging methods are not included in the imaging protocols of clinical trials because of a lack of standardization and the need for further validation. Post-contrast FLAIR is not routinely recommended for gliomas; however, this sequence, in addition to pre-contrast FLAIR, may be useful for detecting leptomeningeal metastases (LM) at recurrence . The inherent pitfalls and limitations of each advanced imaging technique have been summarized elsewhere and are not discussed in detail in this review because they are part of the basic knowledge required of radiologists. In terms of PET imaging, amino acid PET is approved in Europe for differentiating treatment-related changes from tumor recurrence, whereas it has not been approved in the U.S. .
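For readers who track protocol completeness programmatically, the core of the recommended protocol can be expressed as a simple checklist. The short Python sketch below is purely illustrative: the sequence labels, the missing_sequences helper, and the example study are our own constructs, not part of any consensus document, and real-world vendor sequence names will differ.

```python
# Illustrative only: the core sequences recommended for adult glioma trials, as named
# in the paragraph above. Sequence labels and the helper are our own constructs.
REQUIRED_SEQUENCES = {
    "3D pre-contrast T1",
    "3D post-contrast T1",
    "2D T2 (acquired post-contrast)",
    "2D pre-contrast FLAIR",
    "2D DWI",
}
CLINICAL_ADDITIONS = {"perfusion (DSC or ASL)"}  # advised for routine clinical use, not mandated in trials

def missing_sequences(study_sequences):
    """Return the required sequences that are absent from an acquired study."""
    return REQUIRED_SEQUENCES - set(study_sequences)

# Hypothetical study missing the T2 sequence
acquired = {"3D pre-contrast T1", "3D post-contrast T1", "2D pre-contrast FLAIR", "2D DWI"}
print(missing_sequences(acquired))  # {'2D T2 (acquired post-contrast)'}
```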
The RANO criteria were first developed in 2010 to improve the reliability and comparability of response assessment of gliomas in clinical trials, with separate statements for high-grade gliomas (RANO-HGG) and low-grade gliomas (RANO-LGG) . Over time, concerns regarding the challenge of differentiating PsP secondary to radiotherapy or immunotherapy from true tumor progression led to the introduction of the Modified RANO (mRANO) and Immunotherapy RANO (iRANO) criteria . Based on studies comparing RANO-HGG, mRANO, and iRANO , RANO 2.0 was recently developed to provide a single unified set of response criteria for all gliomas in clinical trials . Several aspects of RANO 2.0 are noteworthy. A single set of criteria, rather than separate RANO-HGG and RANO-LGG criteria, is applied to all gliomas. The first post-radiotherapy (post-RT) MRI (21–35 days after RT completion), rather than the postsurgical MRI, should be used as the baseline imaging in newly diagnosed patients, whereas a pre-treatment (pre-Tx) scan (≤14 days before the start of treatment) should be used as the baseline in recurrent patients. Repeat MRI is mandatory to confirm progression within 12 weeks after radiotherapy completion to distinguish PsP from tumor progression, because the incidence of PsP is high during this period. A confident diagnosis of tumor progression within 12 weeks after radiotherapy without follow-up imaging is possible only if the progression is clearly outside the radiation field (for example, beyond the high-dose region or 80% isodose line) or if there is pathological confirmation . For IDH-wildtype glioblastomas with contrast enhancement, the non-enhancing tumor is no longer evaluated, except when assessing the response to antiangiogenic agents; given the limited value of evaluating non-enhancing progression in contrast-enhancing IDH-wildtype glioblastomas, the RANO group recommends removing it from the criteria for determining progression in most trials . In the uncommon IDH-wildtype glioblastomas without contrast enhancement, which comprise approximately 7% of IDH-wildtype glioblastomas , assessment should be based on T2/FLAIR imaging. In clinical trials testing agents that markedly reduce BBB permeability (i.e., antiangiogenic agents), contrast enhancement may not accurately reflect the actual tumor burden; in this setting, diameter- or segmentation-based measurement of mixed contrast-enhancing and non-enhancing tumors, or a qualitative assessment, can be adopted. Details of the changes in RANO 2.0 are presented elsewhere . Although RANO 2.0 is useful for response assessment in clinical trials, the biggest limitation of its application in routine clinical practice is that, owing to validation issues, it does not incorporate advanced imaging such as diffusion, perfusion, or amino acid PET . Therefore, in routine practice, we advise readers to integrate the clinical context and advanced imaging findings rather than relying solely on RANO 2.0 for an accurate diagnosis in each patient.
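Because the baseline-scan and early-progression rules above are essentially decision logic, they can be paraphrased in a few lines of code for orientation. The sketch below is a simplified paraphrase, not an implementation of RANO 2.0; the function names, inputs, and return strings are our own simplifications.

```python
from datetime import date, timedelta

def baseline_window(newly_diagnosed: bool, rt_end: date, tx_start: date):
    """Return the (earliest, latest) acceptable date of the baseline MRI.
    Simplified paraphrase of the RANO 2.0 rules described above; not an implementation."""
    if newly_diagnosed:
        # first post-RT MRI, 21-35 days after completing radiotherapy
        return rt_end + timedelta(days=21), rt_end + timedelta(days=35)
    # recurrent disease entering a trial: pre-treatment scan no more than 14 days before treatment start
    return tx_start - timedelta(days=14), tx_start

def early_progression_call(outside_rt_field: bool, pathology_confirms_tumor: bool) -> str:
    """Within 12 weeks of completing RT, progression is declared only with strong evidence."""
    if outside_rt_field or pathology_confirms_tumor:
        return "tumor progression"
    return "indeterminate: repeat MRI required to exclude pseudoprogression"

# Hypothetical dates for illustration
print(baseline_window(True, rt_end=date(2024, 3, 1), tx_start=date(2024, 3, 1)))
print(early_progression_call(outside_rt_field=False, pathology_confirms_tumor=False))
```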
An overview of post-treatment imaging findings, such as PsP after RT, radiation necrosis, PsR, and PsP after immunotherapy, combined with the timeline and imaging period, is presented along with a checklist for interpretation in .

Pseudoprogression After Radiotherapy

This section focuses on PsP after radiotherapy. PsP during immunotherapy is discussed separately because its clinical and imaging characteristics differ from those of PsP after radiotherapy . PsP after radiotherapy is defined as an enlarged or new contrast enhancement within the radiation field that mimics tumor progression but resolves spontaneously on follow-up imaging without any change in treatment . An accurate diagnosis of PsP is important because effective treatment might be erroneously terminated if PsP is misdiagnosed as tumor progression, with a potentially negative impact on survival, whereas the efficacy of subsequent therapy may be overestimated. Data on whether there is a true survival advantage for patients with PsP are controversial . Reliable discrimination between PsP and early tumor progression can be achieved through follow-up imaging or histopathological confirmation; however, histopathological confirmation is rarely performed when PsP is strongly suspected owing to the invasiveness of the procedure, and a uniform pathological definition of PsP is lacking . Therefore, radiologists play a critical role in the diagnosis of PsP, as the diagnosis is primarily made by radiologic assessment with short-term imaging follow-up and only rarely by histopathological confirmation.

Clinical Presentation

PsP occurs in 30%–40% of patients with IDH-wildtype glioblastoma within 12–24 weeks of completing radiotherapy . Although the RANO 2.0 criteria restrict the time period of PsP to within 12 weeks of completing radiotherapy , PsP may also be seen within 24 weeks of completing radiotherapy, albeit with a lower incidence than within 12 weeks . In terms of clinical findings, the literature is discordant on whether neurological status is associated with PsP: some studies suggest less neurological deterioration in PsP than in tumor progression , whereas others report no significant difference in neurological status between PsP and tumor progression . PsP eventually resolves spontaneously without further treatment.

Risk Factors

MGMT promoter methylation is a well-known predictive biomarker of temozolomide treatment as well as a strong prognostic biomarker in patients with IDH-wildtype glioblastoma and is observed in approximately 40% of patients . MGMT promoter methylation is also acknowledged as a strong risk factor for PsP, although previous studies have shown variable results regarding the incidence of PsP in MGMT promoter-methylated versus unmethylated tumors . Nonetheless, it should be noted that patients with MGMT promoter methylation are at least twice as likely to develop PsP as patients without MGMT promoter methylation, and the vast majority of early imaging changes in these patients represent PsP rather than tumor progression .

Pathophysiology and Histopathology

The mechanism of PsP remains poorly understood; however, it is thought to represent edema and increased vascular permeability secondary to radiotherapy-induced tumor and endothelial cell death . Transient interruption of myelin synthesis secondary to radiation injury in oligodendrocytes, leading to inflammation and increased permeability, is another possible mechanism . Temozolomide, an alkylating agent, damages DNA not only in tumor cells but also in the surrounding normal tissue, amplifying the inflammatory response and contributing to PsP . It should be noted that no specific histopathological classification criteria currently exist for the diagnosis of PsP or radiation necrosis; the final diagnosis depends largely on each pathologist’s judgment and may show high interobserver variability . Acquiring an adequate specimen is critical for effective histological analysis, which is unfortunately not always possible during reoperation . Moreover, tissues may frequently contain a mixture of PsP and residual or recurrent tumor in varying proportions. Although routine pathological distinction between residual and recurrent tumor is recommended, it is not always feasible . Several additional problems hinder reproducible differentiation between PsP and tumor recurrence: there is no cutoff threshold for the overall percentage of recurrent tumor tissue required to diagnose tumor recurrence in a mixture of PsP and tumor recurrence, and whether this tumor tissue should include only “bona fide viable tumor” or also “nonviable tumor” has not been established . In other words, in the post-treatment diagnosis of PsP, radiologists cannot simply pass the burden of accurate diagnosis to the pathology department, and radiological impressions should be actively communicated to clinicians and pathologists.

Imaging

Conventional imaging has low value in differentiating PsP from tumor recurrence . Although contrast-enhancing lesions showing callosal involvement, crossing of the midline, and subependymal spread are reportedly more strongly associated with tumor recurrence than with PsP, these imaging features are also commonly observed in PsP . Therefore, radiologists should not rely heavily on conventional imaging for the diagnosis of PsP. The RT planning field should be routinely checked by the radiologist because a newly developed contrast-enhancing lesion outside the RT field strongly suggests tumor recurrence rather than PsP. In terms of advanced imaging, PsP shows a higher apparent diffusion coefficient (ADC) value and lower relative cerebral blood volume (rCBV) than tumor recurrence because tumor tissue shows higher cellularity and vascularity, leading to low ADC and high rCBV values, respectively . However, as ADC and rCBV values overlap between PsP and tumor recurrence , a comparison of the ADC and rCBV values between the initial preoperative tumor and the current post-treatment imaging should also be considered for accurate differentiation between PsP and tumor recurrence; the recurrent tumor on post-treatment imaging usually shows a trend in ADC and rCBV values similar to that of the tumor on preoperative imaging. On MR spectroscopy, relatively higher values of N-acetylaspartate (NAA) and creatine (Cr) with less elevation of choline (Cho), leading to lower Cho/NAA and Cho/Cr ratios, are observed in PsP than in tumor recurrence . On amide proton transfer (APT) imaging, lower APT signals are observed in PsP than in tumor recurrence . On amino acid PET, less tracer uptake is observed in PsP than in tumor recurrence , and amino acid PET may provide useful information when perfusion imaging is inconclusive in differentiating PsP from tumor recurrence . , , and show representative imaging cases of PsP, early tumor recurrence outside the RT field, and early tumor recurrence inside the RT field, respectively, all within 12 weeks of completing radiotherapy. Note that in all cases, the clinical context and advanced imaging findings were considered for the final interpretation, rather than rigorously adhering to RANO 2.0. summarizes the clinical and imaging differences between PsP after radiotherapy and tumor progression or recurrence.
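Because absolute ADC and rCBV values overlap between PsP and tumor recurrence, the within-patient comparison recommended above (current lesion versus the patient's own preoperative tumor) can be thought of as a simple two-parameter check. The Python sketch below only illustrates that idea: the 20% tolerance, the variable names, and the example values are arbitrary assumptions, not validated cutoffs.

```python
def resembles_preoperative_tumor(adc_now: float, rcbv_now: float,
                                 adc_preop: float, rcbv_preop: float,
                                 tolerance: float = 0.2) -> bool:
    """Illustrative within-patient check: recurrent tumor tends to reproduce the
    low-ADC / high-rCBV profile of the patient's own preoperative tumor, whereas PsP
    tends to show a higher ADC and lower rCBV. The tolerance is an arbitrary assumption."""
    adc_similar = adc_now <= adc_preop * (1 + tolerance)
    rcbv_similar = rcbv_now >= rcbv_preop * (1 - tolerance)
    return adc_similar and rcbv_similar

# Hypothetical values (ADC in 10^-3 mm^2/s, rCBV as a ratio to normal-appearing white matter)
print(resembles_preoperative_tumor(adc_now=1.4, rcbv_now=1.1,
                                   adc_preop=0.9, rcbv_preop=3.0))  # False -> favors PsP
```

A negative result here only means the lesion does not reproduce the preoperative low-ADC/high-rCBV profile; the final call still rests on the full clinical and imaging context.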
Radiation Necrosis

Radiation necrosis requires recognition because effective treatment might be erroneously terminated if it is misdiagnosed as tumor recurrence or progression . Because radiation necrosis manifests as a gradually enlarging contrast-enhancing lesion on follow-up imaging, differentiating it from tumor recurrence is often difficult using conventional imaging.

Clinical Presentation

Radiation necrosis typically occurs 9–12 months after treatment and can occur up to several years later, with an incidence of up to 25% . The clinical presentation of radiation necrosis typically mimics that of tumor progression, showing neurological decline. Compared with PsP, which usually shows transient clinical symptoms, radiation necrosis may persist for a longer period and carries a worse prognosis . The treatment of radiation necrosis includes corticosteroids to relieve cerebral edema and surgical decompression in cases of severe mass effect. Bevacizumab has also shown efficacy in radiation necrosis in terms of improved neurological symptoms , although the dose is usually lower than that used for recurrent tumors (median dose of 7.5 mg/kg in radiation necrosis versus a standard dose of 10 mg/kg in recurrent tumors) .

Risk Factors

Re-irradiation, particularly at high doses and large treatment volumes, increases the risk of radiation necrosis . As these risk factors are widely acknowledged, re-irradiation is now carefully planned in patients with recurrent IDH-wildtype glioblastoma to keep the cumulative biologically equivalent radiation dose within limits that avoid radiation necrosis .

Pathophysiology and Histopathology

Compared with PsP, radiation necrosis shows a more severe tissue reaction. Radiation-induced vascular insult leads to endothelial cell damage, vascular hyalinization, cellular swelling, and necrosis . Oligodendrocyte and white matter damage is also induced by DNA-damaging free radicals. Upregulated vascular endothelial growth factor (VEGF) expression is associated with the magnitude of edema and BBB breakdown . Histologically, radiation necrosis is characterized by coagulative necrosis accompanied by gemistocytic astrocytes, indicating gliosis with atypia. Collections of abnormally dilated and thin-walled telangiectasias can also be observed . As explained in detail in the Pathophysiology and Histopathology section on PsP, there are currently no specific guidelines for the histopathological characterization of radiation necrosis, and pathological differentiation from tumor recurrence is not always easy in tissues containing a mixture of radiation necrosis and recurrent tumor.

Imaging

On conventional imaging, radiation necrosis usually occurs in the white matter within the radiation field. Internal enhancement patterns such as “Swiss cheese” or “soap bubble” patterns have been reported to be more typical of radiation necrosis than of true tumor recurrence . However, the evaluation of these imaging patterns remains subjective and inaccurate . Contrast-enhancing lesions showing multiplicity, callosal involvement, crossing of the midline, and subependymal spread are reportedly more strongly associated with tumor recurrence than with radiation necrosis . However, these imaging findings commonly overlap between radiation necrosis and tumor recurrence, and relying on conventional imaging alone to differentiate between the two may not be optimal. On advanced imaging, radiation necrosis shows a higher ADC value and lower rCBV than tumor recurrence . The presence of centrally restricted diffusion in the necrotic portion of a ring-enhancing lesion may indicate radiation necrosis rather than tumor recurrence; this area is thought to represent coagulative necrosis . The rCBV value may be more helpful than the ADC value in distinguishing radiation necrosis from tumor recurrence . On MR spectroscopy, relatively higher values of NAA and lower values of Cho favor radiation necrosis over tumor recurrence, leading to lower Cho/NAA and Cho/Cr ratios, whereas an elevated lipid-lactate peak may also suggest radiation necrosis . On APT imaging, lower APT signals are observed in radiation necrosis than in tumor recurrence . On amino acid PET, less tracer uptake is observed in radiation necrosis than in tumor recurrence ; however, false-positive uptake has been reported in a patient who underwent re-irradiation with a high cumulative radiation dose, in whom strong tracer uptake was attributed to marked reactive astrogliosis . shows a representative case of pathologically confirmed radiation necrosis. According to RANO 2.0, this case could have been defined as progressive disease because of an increased tumor burden on conventional imaging; however, careful interpretation based on the clinical context and advanced imaging leads to the more accurate and plausible diagnosis of radiation necrosis. summarizes the clinical and imaging differences between radiation necrosis and tumor recurrence or progression.
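The time windows quoted in the PsP and radiation necrosis sections (PsP mostly within 12 weeks and up to about 24 weeks after completing RT; radiation necrosis typically from about 9–12 months onward and sometimes years later) allow a coarse, time-only triage before the imaging features are even examined. The sketch below encodes only those published windows; the conversion to weeks and the hard boundaries are our simplifications and should not be read as diagnostic rules.

```python
def time_based_differential(weeks_since_rt: float, inside_rt_field: bool) -> list:
    """Coarse, time-only differential for a new or enlarging enhancing lesion after
    chemoradiation. Simplified from the windows quoted in the text; the clinical
    context and advanced imaging still make the actual call."""
    differential = ["tumor recurrence/progression"]  # possible at any time point
    if not inside_rt_field:
        # PsP after RT and radiation necrosis arise within the irradiated volume
        return differential
    if weeks_since_rt <= 24:
        differential.append("pseudoprogression (most often within 12 weeks, up to ~24 weeks)")
    if weeks_since_rt >= 39:  # roughly 9 months; radiation necrosis can appear years later
        differential.append("radiation necrosis")
    return differential

# Hypothetical example: enhancing lesion inside the RT field, 8 weeks after completing RT
print(time_based_differential(weeks_since_rt=8, inside_rt_field=True))
```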
Pseudoresponse in Antiangiogenic Therapy

PsR occurs during treatment with bevacizumab, an antiangiogenic therapy that targets VEGF. Bevacizumab may prolong PFS, but not OS, in patients with recurrent tumors . PsR is characterized by a decrease in contrast enhancement without a true antitumor effect, while the lesion remains stable or progresses on T2/FLAIR images . The term PsR has historically been used to describe the phenomenon in which a seemingly rapid response to bevacizumab is observed (for example, a markedly decreased size of the contrast-enhancing tumor on post-contrast T1-weighted images) without any difference in OS in clinical trials of patients with recurrent tumors . This term is used less commonly than PsP or radiation necrosis nowadays because, by definition, PsR may encompass both progressive and stable disease (visualized as non-enhancing tumor on T2/FLAIR images), and it is more important to determine whether the patient has progressive or stable disease in either the contrast-enhancing or non-enhancing tumor than to label the finding as PsR. Nevertheless, understanding the concept of PsR is important for correctly interpreting post-treatment imaging during bevacizumab treatment.

Clinical Presentation

PsR is usually observed on follow-up imaging shortly after the initiation of bevacizumab treatment . Owing to the rapid decrease in vasogenic edema and mass effect, neurological symptoms may improve. Approximately 30% of patients undergoing bevacizumab treatment may show PsR . In terms of tumor progression patterns, the proportion of predominantly non-enhancing tumor recurrence, which may be considered a form of PsR, was reported to be 34.2% in a meta-analysis of recurrent high-grade gliomas .

Pathophysiology and Histopathology

By targeting VEGF, bevacizumab not only reduces tumor vascularity but also normalizes the tumor vasculature, improving the distribution of blood supply while also reducing tumor-associated edema and tissue hypoxia . When angiogenesis is blocked with bevacizumab, the growth pattern of IDH-wildtype glioblastoma may change, leading to the utilization of mature vasculature after infiltration of normal host tissue (called “vessel co-option”) . The histopathology of PsR is not well described in the literature because reoperation with pathological confirmation is usually not performed when this imaging pattern is observed.

Imaging

On imaging, PsR shows a rapid decrease in the contrast-enhancing tumor and peritumoral edema, usually at the first follow-up imaging after the initiation of bevacizumab treatment, whereas the non-enhancing tumor remains stable or increases in size . Therefore, careful examination of the T2/FLAIR sequence is required on follow-up imaging after antiangiogenic therapy to delineate the extent of the non-enhancing tumor, even when the contrast-enhancing tumor decreases or nearly disappears on post-contrast T1-weighted imaging. The non-enhancing tumor must be differentiated from peritumoral edema, because bevacizumab may decrease peritumoral edema while the extent of the non-enhancing tumor actually increases in PsR. A detailed explanation of the imaging differentiation of non-enhancing tumor from peritumoral edema is provided in the Part I Review and elsewhere . Few studies have evaluated advanced imaging in PsR. One study reported a trend toward normalization of ADC values in previously contrast-enhancing tumor and FLAIR hyperintense areas in patients with PsR, which may be attributed to decreased edema ; however, this study included both peritumoral edema and non-enhancing tumor as FLAIR hyperintense areas, and in our experience, non-enhancing tumor (apart from peritumoral edema) may not show normalization of ADC values. shows a representative case of PsR with non-enhancing tumor progression.
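Since PsR is defined by divergent behavior of the contrast-enhancing and non-enhancing components, the follow-up read after starting bevacizumab amounts to a two-channel comparison. The sketch below only illustrates that bookkeeping: the 25% decrease fraction, the volume-based inputs, and the example numbers are arbitrary assumptions for illustration, not trial-defined thresholds.

```python
def flag_possible_pseudoresponse(ce_baseline_ml: float, ce_followup_ml: float,
                                 nonenh_baseline_ml: float, nonenh_followup_ml: float,
                                 decrease_frac: float = 0.25) -> bool:
    """Flag PsR-like behavior under antiangiogenic therapy: the contrast-enhancing (CE)
    component shrinks markedly while the non-enhancing T2/FLAIR tumor is stable or growing.
    The 25% fraction is an arbitrary illustrative assumption, not a trial-defined cutoff."""
    ce_shrinking = ce_followup_ml <= ce_baseline_ml * (1 - decrease_frac)
    nonenh_not_shrinking = nonenh_followup_ml >= nonenh_baseline_ml
    return ce_shrinking and nonenh_not_shrinking

# Hypothetical volumes in mL at bevacizumab baseline and first follow-up
print(flag_possible_pseudoresponse(30.0, 8.0, 40.0, 46.0))  # True -> scrutinize T2/FLAIR
```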
Pseudoprogression During Immunotherapy

During immunotherapy, PsP may manifest as enlarged or newly developed contrast-enhancing lesions with increased perilesional edema, which may decrease in size during follow-up without further treatment. PsP during immunotherapy is discussed separately from PsP after radiotherapy because the clinical and imaging characteristics of the two conditions differ: 1) concurrent chemoradiotherapy is the standard of care in newly diagnosed patients, whereas immunotherapy is mostly performed in patients with recurrent IDH-wildtype glioblastoma, so the clinical course in which each form of PsP appears is different; 2) the underlying mechanism of PsP during immunotherapy is probably distinct from that associated with radiotherapy and may lead to different time windows and imaging manifestations; and 3) unlike PsP after radiotherapy, PsP during immunotherapy is not limited to the RT field, and a completely new contrast-enhancing lesion appearing at a distant site can also represent PsP in patients treated with immunotherapy, whereas a new contrast-enhancing lesion appearing outside the RT field after radiation treatment indicates tumor progression rather than PsP after radiotherapy . Immunotherapy is a rapidly emerging treatment modality for patients with recurrent IDH-wildtype glioblastoma, although its survival benefit has not been demonstrated . It includes various modalities such as vaccination therapy, oncolytic viral therapy, immune checkpoint inhibitors, and CAR T cell therapy. Vaccination therapy relies on dendritic cell-mediated presentation of peptide vaccines, whereas oncolytic viral therapy uses viruses engineered to selectively infect or replicate in tumor cells. Immune checkpoint inhibitors, such as anti-PD-1 antibodies (nivolumab or pembrolizumab) and the anti-CTLA-4 antibody (ipilimumab), enable cytotoxic T cell activation, whereas CAR T cell therapy uses genetically modified T cells to target the tumor.

Clinical Presentation

The timeframe for PsP during immunotherapy is usually several months longer than that for PsP after radiotherapy (within 12–24 weeks after completing RT) and remains to be defined. The previous Immunotherapy RANO criteria defined the period as up to 6 months after initiating immunotherapy . Furthermore, the timeframe may differ depending on the class of immunotherapy administered. Patients may present with worsening neurological symptoms during immunotherapy owing to the increased mass effect.

Pathophysiology and Histopathology

In PsP during immunotherapy, intratumoral immune cell infiltrates, including macrophages and cytotoxic T cells, are associated with geographic necrosis and vascular wall hyalinization . Increased cellularity can be observed because of reactive astrocytosis with occasional atypical cells .

Imaging

Little information is available regarding the differentiation of PsP during immunotherapy from tumor progression on either conventional or advanced imaging in IDH-wildtype glioblastoma. It is generally presumed that advanced imaging may play a much larger role in the accurate diagnosis of PsP. However, as previous studies evaluating the role of advanced imaging were mostly single-institution studies with a limited number of patients, different immunotherapy modalities, and different imaging sequences , no firm conclusion can yet be drawn. These studies showed discordant results regarding the significance of each imaging parameter in predicting PsP during immunotherapy , and some newer MRI contrast agents (such as ultrasmall superparamagnetic iron oxide) and PET radiotracers are not widely available . Future research directions include multicenter studies with a comprehensive evaluation of both MRI and PET imaging parameters to identify imaging biomarkers that predict PsP during immunotherapy .
This section focuses on PsP after radiotherapy. PsP during immunotherapy will be discussed separately because of the different clinical and imaging characteristics from PsP after radiotherapy . PsP after radiotherapy is defined as an enlarged or new contrast enhancement within the radiation field mimicking tumor progression that resolves spontaneously without modifying the treatment on follow-up imaging . An accurate diagnosis of PsP is important because effective treatment might be erroneously terminated if PsP is misdiagnosed as tumor progression, with a potentially negative impact on survival, whereas the efficacy of subsequent therapy may be overestimated. Data on whether there is a true survival advantage for patients with PsP are controversial . Reliable discrimination between PsP and early tumor progression can be achieved through follow-up imaging or histopathological confirmation; however, histopathological confirmation is rarely performed in cases where PsP is strongly suspected owing to its invasive approach, and a uniform pathological diagnosis of PsP is lacking . Therefore, radiologists play a critical role in the diagnosis of PsP, as the diagnosis is primarily made by radiologic assessment with a short-term period of imaging follow-up, and rarely by histopathological confirmation. Clinical Presentation PsP occurs in 30%–40% of patients with IDH-wildtype glioblastoma within 12–24 weeks of completing radiotherapy . Although the RANO 2.0 criteria restrict the time period of PsP to within 12 weeks of completing radiotherapy , PsP may also be seen within 24 weeks of completing radiotherapy with a lower incidence than within 12 weeks . In terms of clinical findings, the literature shows discordant results on whether neurological status is associated with PsP, whereas others suggest less neurological deterioration in PsP than in tumor progression , and some studies suggest no significant difference in neurological status between PsP and tumor progression . PsP eventually resolves spontaneously without further treatment. Risk Factors MGMT promoter methylation is a well-known predictive biomarker of temozolomide treatment as well as a strong prognostic biomarker in patients with IDH-wildtype glioblastoma and is observed in approximately 40% of patients . MGMT promoter methylation is also acknowledged as a strong risk factor for PsP, with a higher likelihood of developing PsP than in patients without MGMT promoter methylation, although previous studies have shown variable results in terms of the incidence of PsP in MGMT promoter-methylated versus unmethylated tumors . Nonetheless, it should be noted that patients with MGMT promoter methylation are more likely to develop PsP (at least twice as much as patients without MGMT promoter methylation), and a vast majority of early imaging changes in patients with MGMT promoter methylation represent PsP rather than tumor progression . Pathophysiology and Histopathology The mechanism of PsP remains poorly understood; however, it is thought to represent edema and increased vascular permeability secondary to radiotherapy-induced tumor and endothelial cell death . Transient interruption of myelin synthesis secondary to radiation injury in oligodendrocytes, leading to inflammation and increased permeability, is a possible mechanism . Temozolomide, an alkylating agent, damages DNA not only in tumor cells but also in the surrounding normal tissue, amplifying the inflammatory response and contributing to PsP . 
It should be noted that no specific histopathological classification criteria currently exist for the diagnosis of PsP or radiation necrosis; the final diagnosis depends largely on each pathologist’s personal judgment and may show high interobserver variability . Acquiring an adequate specimen is critical for effective histological analysis, which is unfortunately not always possible during reoperation . Moreover, tissues may frequently contain a mixture of PsP and residual or recurrent tumors in varying proportions. Although routine pathological distinction between residual and recurrent tumors is recommended, it is not always feasible . Several more problems lie in the reproducible differentiation between PsP and tumor recurrence. There is no cutoff threshold for the overall percentage of recurrent tumor tissue to diagnose tumor recurrence in a mixture of PsP and tumor recurrence, and whether this tumor tissue should include only “bona fide viable tumor” or should also include “nonviable tumor” has not been established . In other words, in post-treatment diagnosis of PsP, radiologists cannot simply pass on the burden of accurate diagnosis to the pathology department, and radiological impressions should be actively communicated with clinicians and pathologists. Imaging Conventional imaging has low value in differentiating PsP from tumor recurrence . Although contrast-enhancing lesions showing callosal involvement, crossing of the midline, and subependymal spread are reportedly highly associated with tumor recurrence compared to PsP, these imaging features are also commonly observed in PsP . Therefore, radiologists should not rely heavily on conventional imaging for PsP diagnosis. The RT planning field should be routinely checked by a radiologist because a newly developed contrast-enhancing lesion outside the RT field strongly suggests tumor recurrence rather than PsP. In terms of advanced imaging, PsP shows a higher apparent diffusion coefficient (ADC) value and lower relative cerebral blood volume (rCBV) than tumor recurrence because tumor tissue shows higher cellularity and vascularity, leading to low ADC and high rCBV values, respectively . However, as there are overlapping ADC and rCBV values between PsP and tumor recurrence , a comparison of the ADC and rCBV values between the initial preoperative tumor and current post-treatment imaging should also be considered for accurate differentiation between PsP and tumor recurrence. The recurrent tumor on post-treatment imaging usually shows a similar trend in ADC and rCBV values as the tumor on preoperative imaging. On MR spectroscopy, relatively higher values of N-acetylaspartate (NAA) and creatinine (Cr) with lower values of choline (Cho) elevation, leading to lower Cho/NAA and lower Cho/Cr, are observed in PsP than in tumor recurrence . In amide proton transfer imaging, lower APT signals are observed in PsP than in tumor recurrence . In amino acid PET, less tracker uptake is observed in PsP than in tumor recurrence , and amino acid PET may provide useful information when perfusion imaging shows inconclusive results in differentiating PsP from tumor recurrence . , , and show representative imaging cases of PsP, early tumor recurrence outside the RT field, and early tumor recurrence inside the RT field, respectively, all within 12 weeks of completing radiotherapy. Note that in all cases, the clinical context and advanced imaging findings were considered for the final interpretation, rather than rigorously adhering to RANO 2.0. 
summarizes the clinical and imaging differences between PsP after radiotherapy and tumor progression and recurrence.
PsP occurs in 30%–40% of patients with IDH-wildtype glioblastoma within 12–24 weeks of completing radiotherapy . Although the RANO 2.0 criteria restrict the time period of PsP to within 12 weeks of completing radiotherapy , PsP may also be seen within 24 weeks of completing radiotherapy with a lower incidence than within 12 weeks . In terms of clinical findings, the literature shows discordant results on whether neurological status is associated with PsP, whereas others suggest less neurological deterioration in PsP than in tumor progression , and some studies suggest no significant difference in neurological status between PsP and tumor progression . PsP eventually resolves spontaneously without further treatment.
MGMT promoter methylation is a well-known predictive biomarker of temozolomide treatment as well as a strong prognostic biomarker in patients with IDH-wildtype glioblastoma and is observed in approximately 40% of patients . MGMT promoter methylation is also acknowledged as a strong risk factor for PsP, with a higher likelihood of developing PsP than in patients without MGMT promoter methylation, although previous studies have shown variable results in terms of the incidence of PsP in MGMT promoter-methylated versus unmethylated tumors . Nonetheless, it should be noted that patients with MGMT promoter methylation are more likely to develop PsP (at least twice as much as patients without MGMT promoter methylation), and a vast majority of early imaging changes in patients with MGMT promoter methylation represent PsP rather than tumor progression .
The mechanism of PsP remains poorly understood; however, it is thought to represent edema and increased vascular permeability secondary to radiotherapy-induced tumor and endothelial cell death . Transient interruption of myelin synthesis secondary to radiation injury in oligodendrocytes, leading to inflammation and increased permeability, is a possible mechanism . Temozolomide, an alkylating agent, damages DNA not only in tumor cells but also in the surrounding normal tissue, amplifying the inflammatory response and contributing to PsP . It should be noted that no specific histopathological classification criteria currently exist for the diagnosis of PsP or radiation necrosis; the final diagnosis depends largely on each pathologist’s personal judgment and may show high interobserver variability . Acquiring an adequate specimen is critical for effective histological analysis, which is unfortunately not always possible during reoperation . Moreover, tissues may frequently contain a mixture of PsP and residual or recurrent tumors in varying proportions. Although routine pathological distinction between residual and recurrent tumors is recommended, it is not always feasible . Several more problems lie in the reproducible differentiation between PsP and tumor recurrence. There is no cutoff threshold for the overall percentage of recurrent tumor tissue to diagnose tumor recurrence in a mixture of PsP and tumor recurrence, and whether this tumor tissue should include only “bona fide viable tumor” or should also include “nonviable tumor” has not been established . In other words, in post-treatment diagnosis of PsP, radiologists cannot simply pass on the burden of accurate diagnosis to the pathology department, and radiological impressions should be actively communicated with clinicians and pathologists.
Conventional imaging has low value in differentiating PsP from tumor recurrence . Although contrast-enhancing lesions showing callosal involvement, crossing of the midline, and subependymal spread are reportedly highly associated with tumor recurrence compared to PsP, these imaging features are also commonly observed in PsP . Therefore, radiologists should not rely heavily on conventional imaging for PsP diagnosis. The RT planning field should be routinely checked by a radiologist because a newly developed contrast-enhancing lesion outside the RT field strongly suggests tumor recurrence rather than PsP. In terms of advanced imaging, PsP shows a higher apparent diffusion coefficient (ADC) value and lower relative cerebral blood volume (rCBV) than tumor recurrence because tumor tissue shows higher cellularity and vascularity, leading to low ADC and high rCBV values, respectively . However, as there are overlapping ADC and rCBV values between PsP and tumor recurrence , a comparison of the ADC and rCBV values between the initial preoperative tumor and current post-treatment imaging should also be considered for accurate differentiation between PsP and tumor recurrence. The recurrent tumor on post-treatment imaging usually shows a similar trend in ADC and rCBV values as the tumor on preoperative imaging. On MR spectroscopy, relatively higher values of N-acetylaspartate (NAA) and creatinine (Cr) with lower values of choline (Cho) elevation, leading to lower Cho/NAA and lower Cho/Cr, are observed in PsP than in tumor recurrence . In amide proton transfer imaging, lower APT signals are observed in PsP than in tumor recurrence . In amino acid PET, less tracker uptake is observed in PsP than in tumor recurrence , and amino acid PET may provide useful information when perfusion imaging shows inconclusive results in differentiating PsP from tumor recurrence . , , and show representative imaging cases of PsP, early tumor recurrence outside the RT field, and early tumor recurrence inside the RT field, respectively, all within 12 weeks of completing radiotherapy. Note that in all cases, the clinical context and advanced imaging findings were considered for the final interpretation, rather than rigorously adhering to RANO 2.0. summarizes the clinical and imaging differences between PsP after radiotherapy and tumor progression and recurrence.
Radiation necrosis requires recognition because effective treatment might be erroneously terminated if it is misdiagnosed as tumor recurrence or progression . Because radiation necrosis manifests as a gradually enlarging contrast-enhanced lesion on follow-up imaging, differentiating it from tumor recurrence is often difficult using conventional imaging. Clinical Presentation Radiation necrosis typically occurs 9–12 months after treatment and can occur up to several years later, with an incidence of up to 25% . The clinical presentation of radiation necrosis typically mimics that of tumor progression, showing neurological decline. Compared with PsP, which usually shows transient clinical symptoms, radiation necrosis may persist for a longer period with a worse prognosis . The treatment of radiation necrosis includes corticosteroids to relieve cerebral edema and surgical decompression in cases of severe mass effects. Bevacizumab has also shown efficacy in radiation necrosis in terms of improved neurological symptoms , although the dose is usually lower compared to treatment in recurrent tumors (median dose of 7.5 mg/kg in radiation necrosis compared to a standard dose of 10 mg/kg in recurrent tumors) . Risk Factors Re-irradiation, particularly at high doses and large treatment volumes, increases the risk of radiation necrosis . As these risk factors are widely acknowledged, re-irradiation is now carefully planned in patients with recurrent IDH-wildtype glioblastoma to conform to the cumulative biologically equivalent radiation dose that avoids radiation necrosis . Pathophysiology and Histopathology Compared to PsP, radiation necrosis shows more severe tissue reactions. Radiation-induced vascular insult leads to endothelial cell damage, vascular hyalinization, cellular swelling, and necrosis . Oligodendrocyte and white matter damage is also induced by DNA-damaging free radicals. Upregulated vascular endothelial growth factor (VEGF) expression is associated with the magnitude of edema and BBB breakdown . Histologically, radiation necrosis is characterized by coagulative necrosis accompanied by gemistocytic astrocytes, indicating gliosis with atypia. Collections of abnormally dilated and thin-walled telangiectasias can also be observed . As explained in detail in the Pathophysiology and Histopathology section of PsP, there are currently no specific guidelines for the histopathological characterization of radiation necrosis. Pathological differentiation from tumor recurrence is not always easy in tissues with mixed radiation necrosis and tumor recurrence; details are presented in the previous section (Pathophysiology and Histopathology section of PsP). Imaging In conventional imaging, radiation necrosis usually occurs in the white matter within the radiation field. Internal enhancement patterns such as “Swiss cheese” or “soap bubble” patterns have been shown to be more typical of radiation necrosis than true tumor recurrence . However, the evaluation of these imaging patterns remains subjective and inaccurate . Contrast-enhancing lesions showing multiplicity, callosal involvement, crossing of the midline, and subependymal spread are reportedly more highly associated with tumor recurrence than with radiation necrosis . However, these imaging findings commonly overlap between radiation necrosis and tumor recurrence, and relying on conventional imaging alone to differentiate between radiation necrosis and tumor recurrence may not be optimal. 
On advanced imaging, radiation necrosis shows a higher ADC value and lower rCBV than tumor recurrence . The presence of centrally restricted diffusion in the necrotic portion of the ring-enhancing lesion may indicate radiation necrosis rather than tumor recurrence; this area is thought to represent coagulative necrosis during radiation necrosis . The rCBV value may be more helpful than the ADC value in distinguishing radiation necrosis from tumor recurrence . On MR spectroscopy, relatively higher values of NAA and lower values of Cho favor radiation necrosis over tumor recurrence, leading to lower Cho/NAA and Cho/Cr ratios, whereas an elevated lipid-lactate peak may also suggest radiation necrosis . In amide proton transfer imaging, lower APT signals are observed in radiation necrosis than in tumor recurrence . In amino acid PET, less tracker uptake is observed in radiation necrosis than in tumor recurrence ; however, false-positive uptake was reported in a patient who underwent re-irradiation with a high cumulative radiation dose, in which strong tracer uptake can be related to strong reactive astrogliosis . shows a representative case of pathologically confirmed radiation-induced necrosis. According to RANO 2.0, this case could have been defined as progressive disease because of an increased tumor burden based on conventional imaging; however, careful interpretation based on the clinical context and advanced imaging may lead to a more accurate and plausible diagnosis of radiation necrosis. summarizes the clinical and imaging differences between radiation necrosis and tumor recurrence or progression.
Radiation necrosis typically occurs 9–12 months after treatment and can occur up to several years later, with an incidence of up to 25% . The clinical presentation of radiation necrosis typically mimics that of tumor progression, showing neurological decline. Compared with PsP, which usually shows transient clinical symptoms, radiation necrosis may persist for a longer period with a worse prognosis . The treatment of radiation necrosis includes corticosteroids to relieve cerebral edema and surgical decompression in cases of severe mass effects. Bevacizumab has also shown efficacy in radiation necrosis in terms of improved neurological symptoms , although the dose is usually lower compared to treatment in recurrent tumors (median dose of 7.5 mg/kg in radiation necrosis compared to a standard dose of 10 mg/kg in recurrent tumors) .
Re-irradiation, particularly at high doses and large treatment volumes, increases the risk of radiation necrosis . As these risk factors are widely acknowledged, re-irradiation is now carefully planned in patients with recurrent IDH-wildtype glioblastoma to conform to the cumulative biologically equivalent radiation dose that avoids radiation necrosis .
Compared to PsP, radiation necrosis shows more severe tissue reactions. Radiation-induced vascular insult leads to endothelial cell damage, vascular hyalinization, cellular swelling, and necrosis . Oligodendrocyte and white matter damage is also induced by DNA-damaging free radicals. Upregulated vascular endothelial growth factor (VEGF) expression is associated with the magnitude of edema and BBB breakdown . Histologically, radiation necrosis is characterized by coagulative necrosis accompanied by gemistocytic astrocytes, indicating gliosis with atypia. Collections of abnormally dilated and thin-walled telangiectasias can also be observed . As explained in detail in the Pathophysiology and Histopathology section of PsP, there are currently no specific guidelines for the histopathological characterization of radiation necrosis. Pathological differentiation from tumor recurrence is not always easy in tissues with mixed radiation necrosis and tumor recurrence; details are presented in the previous section (Pathophysiology and Histopathology section of PsP).
In conventional imaging, radiation necrosis usually occurs in the white matter within the radiation field. Internal enhancement patterns such as “Swiss cheese” or “soap bubble” patterns have been shown to be more typical of radiation necrosis than true tumor recurrence . However, the evaluation of these imaging patterns remains subjective and inaccurate . Contrast-enhancing lesions showing multiplicity, callosal involvement, crossing of the midline, and subependymal spread are reportedly more highly associated with tumor recurrence than with radiation necrosis . However, these imaging findings commonly overlap between radiation necrosis and tumor recurrence, and relying on conventional imaging alone to differentiate between radiation necrosis and tumor recurrence may not be optimal. On advanced imaging, radiation necrosis shows a higher ADC value and lower rCBV than tumor recurrence . The presence of centrally restricted diffusion in the necrotic portion of the ring-enhancing lesion may indicate radiation necrosis rather than tumor recurrence; this area is thought to represent coagulative necrosis during radiation necrosis . The rCBV value may be more helpful than the ADC value in distinguishing radiation necrosis from tumor recurrence . On MR spectroscopy, relatively higher values of NAA and lower values of Cho favor radiation necrosis over tumor recurrence, leading to lower Cho/NAA and Cho/Cr ratios, whereas an elevated lipid-lactate peak may also suggest radiation necrosis . In amide proton transfer imaging, lower APT signals are observed in radiation necrosis than in tumor recurrence . In amino acid PET, less tracker uptake is observed in radiation necrosis than in tumor recurrence ; however, false-positive uptake was reported in a patient who underwent re-irradiation with a high cumulative radiation dose, in which strong tracer uptake can be related to strong reactive astrogliosis . shows a representative case of pathologically confirmed radiation-induced necrosis. According to RANO 2.0, this case could have been defined as progressive disease because of an increased tumor burden based on conventional imaging; however, careful interpretation based on the clinical context and advanced imaging may lead to a more accurate and plausible diagnosis of radiation necrosis. summarizes the clinical and imaging differences between radiation necrosis and tumor recurrence or progression.
PsR occurs during bevacizumab treatment, an antiangiogenic therapy that targets VEGF. Bevacizumab may prolong PFS, but not OS, in patients with recurrent tumors . PsR is characterized by a decrease in contrast enhancement without a true antitumor effect, whereas the lesion remains stable or has progressed on T2/FLAIR images . The term PsR has historically been used to describe the phenomenon in which a seemingly rapid response to bevacizumab is observed (for example, a markedly decreased size of contrast-enhancing tumor on post-contrast T1-weighted images) without any difference in OS in clinical trials of patients with recurrent tumors . This term is not used as commonly as PsP or radiation necrosis nowadays because it is important to determine whether the patient has progressive or stable disease in either contrast-enhancing or non-enhancing tumors rather than specifically determining PsR, which may include both progressive and stable disease (visualized as non-enhancing tumors on T2/FLAIR images) by its definition. However, understanding the concept of PsR is important for correctly interpreting post-treatment imaging results during bevacizumab treatment. Clinical Presentation PsR is usually observed shortly after the initiation of bevacizumab treatment during follow-up imaging . Owing to the rapid decrease in vasogenic edema and mass effect, neurological symptoms may improve. Approximately 30% of patients undergoing bevacizumab treatment may show PsR . In terms of tumor progression patterns, the proportion of predominantly non-enhancing tumor recurrence patterns, which may be considered a form of PsR, was reported to be 34.2% in a meta-analysis of recurrent high-grade gliomas . Pathophysiology and Histopathology By targeting VEGF, bevacizumab not only reduces tumor vascularity but also normalizes tumor vasculature, improving the distribution of blood supply while also reducing tumor-associated edema and tissue hypoxia . When angiogenesis is blocked with bevacizumab, the growth pattern of IDH-wildtype glioblastoma may change, leading to the utilization of mature vasculature after infiltrating normal host tissue (called “vessel co-option”) . The histopathology of PsR is not well described in the literature because reoperation with pathological confirmation is usually not performed in this state of imaging manifestation. Imaging On imaging, PsR shows a rapid decrease in contrast-enhancing tumor and peritumoral edema, usually at the first follow-up imaging after bevacizumab treatment initiation, whereas the non-enhancing tumor remains stable or has increased in size . Therefore, careful examination of the T2/FLAIR sequence is required for follow-up imaging after antiangiogenic therapy to delineate the extent of non-enhancing tumors, even when contrast-enhancing tumors decrease or nearly disappear on post-contrast T1-weighted imaging. The non-enhancing tumor must be differentiated from peritumoral edema, because the effect of bevacizumab may result in a decrease in peritumoral edema, whereas the extent of the non-enhancing tumor may actually increase in PsR. A detailed explanation of the imaging differentiation of non-enhancing tumors from peritumoral edema has been provided in Part I Review and elsewhere . Few studies have evaluated the advanced imaging of PsR. One study reported a trend of normalization of ADC values in previous contrast-enhancing tumors and FLAIR hyperintense areas in patients with PsR, which may be attributed to decreased edema . 
However, this study included both peritumoral edema and non-enhancing tumors as FLAIR hyperintense areas. In our experience, non-enhancing tumors (apart from peritumoral edema) may not show normalization of ADC values. shows a representative case of PsR tumor progression.
PsR is usually observed shortly after the initiation of bevacizumab treatment during follow-up imaging . Owing to the rapid decrease in vasogenic edema and mass effect, neurological symptoms may improve. Approximately 30% of patients undergoing bevacizumab treatment may show PsR . In terms of tumor progression patterns, the proportion of predominantly non-enhancing tumor recurrence patterns, which may be considered a form of PsR, was reported to be 34.2% in a meta-analysis of recurrent high-grade gliomas .
By targeting VEGF, bevacizumab not only reduces tumor vascularity but also normalizes tumor vasculature, improving the distribution of blood supply while also reducing tumor-associated edema and tissue hypoxia . When angiogenesis is blocked with bevacizumab, the growth pattern of IDH-wildtype glioblastoma may change, leading to the utilization of mature vasculature after infiltrating normal host tissue (called “vessel co-option”) . The histopathology of PsR is not well described in the literature because reoperation with pathological confirmation is usually not performed in this state of imaging manifestation.
On imaging, PsR shows a rapid decrease in contrast-enhancing tumor and peritumoral edema, usually at the first follow-up imaging after bevacizumab treatment initiation, whereas the non-enhancing tumor remains stable or has increased in size . Therefore, careful examination of the T2/FLAIR sequence is required for follow-up imaging after antiangiogenic therapy to delineate the extent of non-enhancing tumors, even when contrast-enhancing tumors decrease or nearly disappear on post-contrast T1-weighted imaging. The non-enhancing tumor must be differentiated from peritumoral edema, because the effect of bevacizumab may result in a decrease in peritumoral edema, whereas the extent of the non-enhancing tumor may actually increase in PsR. A detailed explanation of the imaging differentiation of non-enhancing tumors from peritumoral edema has been provided in Part I Review and elsewhere . Few studies have evaluated the advanced imaging of PsR. One study reported a trend of normalization of ADC values in previous contrast-enhancing tumors and FLAIR hyperintense areas in patients with PsR, which may be attributed to decreased edema . However, this study included both peritumoral edema and non-enhancing tumors as FLAIR hyperintense areas. In our experience, non-enhancing tumors (apart from peritumoral edema) may not show normalization of ADC values. shows a representative case of PsR tumor progression.
During immunotherapy, PsP may manifest as enlarged or newly developed contrast-enhancing lesions with increased perilesional edema, which may decrease in size during follow-up without further treatment. PsP during immunotherapy is discussed separately from PsP after radiotherapy because of the different clinical and imaging characteristics of these two conditions: 1) As concurrent chemoradiotherapy is the standard of care in initially diagnosed patients, whereas immunotherapy is mostly performed in recurrent patients with IDH-wildtype glioblastoma, the clinical course in which PsP after radiotherapy and PsP during immunotherapy appear is different, 2) The underlying mechanism for PsP during immunotherapy is probably distinct from the mechanism associated with radiotherapy and may lead to different time windows and imaging manifestations, 3) Unlike PsP after radiotherapy, PsP during immunotherapy is not limited within the RT field, and a completely new contrast-enhancing lesion appearing at a distant site could also be PsP in patients treated with immunotherapy, whereas a new contrast-enhancing lesion appearing outside the RT field after radiation treatment is tumor progression but not PsP after radiotherapy . Immunotherapy is a rapidly emerging treatment modality for patients with recurrent IDH-wildtype glioblastoma, although its survival benefit has not been demonstrated . This includes various modalities such as vaccination therapy, oncolytic viral therapy, immune checkpoint inhibitors, and CAR T cell therapy. Vaccination therapy relies on dendritic cell-mediated presentation of peptide vaccines, whereas oncolytic viral therapy creates viruses that selectively infect or replicate in tumor cells. Immune checkpoint inhibitors such as anti-PD-1 antibody (nivolumab or pembrolizumab) and anti-CTLA-4 antibody (ipilimumab), enable cytotoxic T cell activation, whereas CAR T cell therapy uses genetically modified T cells to target the tumor. Clinical Presentation The timeframe for PsP during immunotherapy is usually several months longer than that for PsP after radiotherapy (within 12–24 weeks after completing RT) and remains to be defined. The previous Immunotherapy RANO criteria defined the period up to 6 months after initiating immunotherapy . Furthermore, the timeframe may differ depending on the class of immunotherapy administered. Patients may present with worsening neurological symptoms during immunotherapy owing to the increased mass effect. Pathophysiology and Histopathology In PsP during immunotherapy, intratumoral immune cell infiltrates, including macrophage cytotoxic T cells, are associated with geographic necrosis and vascular wall hyalinization . Increased cellularity can be observed because of reactive astrocytosis with occasional atypical cells . Imaging Little information is available regarding the differentiation of PsPs during immunotherapy from tumor progression in terms of both conventional and advanced imaging in IDH-wildtype glioblastoma. It is generally presumed that advanced imaging may play a much larger role in the accurate diagnosis of PsP. However, as previous studies evaluating the role of advanced imaging were mostly single-institution studies with a limited number of patients, with different immunotherapy modalities, and different imaging sequences , we speculate that there is no strong conclusion yet. 
These studies showed discordant results regarding the significance of each imaging parameter in predicting PsP during immunotherapy , and some new MRI contrast agents (such as ultrasmall superparamagnetic iron oxide) or PET radiotracers are not widely available . Future research directions include a multicenter study with a comprehensive evaluation of both MRI and PET imaging parameters to identify imaging biomarkers that predict PsP in immunotherapy .
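Purely as an illustration of the time windows quoted above, the short Python sketch below checks whether a new or enlarging contrast-enhancing lesion falls within the period in which PsP would typically be considered; only the quoted boundaries (12–24 weeks after completing RT, up to 6 months after initiating immunotherapy according to the previous Immunotherapy RANO criteria) are taken from the text, whereas the function names, the use of the 24-week upper bound, and the example dates are simplifying assumptions rather than a validated decision rule.

from datetime import date

# Illustrative windows only; boundaries follow the figures quoted in the text
# (PsP after RT: within roughly 12-24 weeks of completing RT; PsP during
# immunotherapy: up to ~6 months after initiation per the previous iRANO criteria).
RT_PSP_WINDOW_WEEKS = 24          # upper bound chosen here for illustration
IMMUNO_PSP_WINDOW_DAYS = 182      # approximately 6 months

def within_rt_psp_window(rt_completion: date, scan: date) -> bool:
    """True if the scan falls within the period in which PsP after RT is typically considered."""
    weeks = (scan - rt_completion).days / 7
    return 0 <= weeks <= RT_PSP_WINDOW_WEEKS

def within_immunotherapy_psp_window(immunotherapy_start: date, scan: date) -> bool:
    """True if the scan falls within roughly 6 months of starting immunotherapy."""
    days = (scan - immunotherapy_start).days
    return 0 <= days <= IMMUNO_PSP_WINDOW_DAYS

if __name__ == "__main__":
    # Hypothetical dates for demonstration only.
    print(within_rt_psp_window(date(2023, 1, 10), date(2023, 4, 1)))             # True (~12 weeks)
    print(within_immunotherapy_psp_window(date(2023, 1, 10), date(2023, 9, 1)))  # False (>6 months)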
The ultimate purpose of characterizing the various treatment-related changes discussed above is to correctly diagnose tumor recurrence or progression. Tumor recurrence/progression is an inevitable and formidable process in IDH-wildtype glioblastomas, indicating the failure of the current treatment regimen. The median OS after the first tumor recurrence or progression is approximately 9 months . Although there is no standard of care for recurrent tumors, various treatment options can be selected according to the patient’s age, performance status, MGMT promoter methylation status, and pattern of tumor recurrence/progression . Therefore, accurate diagnosis of tumor recurrence/progression and accurate description of its pattern are important for treatment decisions. There is no unified classification for the pattern of tumor recurrence, and the definition varies in the literature depending on whether the criteria are based on the distance from the resection cavity or on the isodose surface of the radiation field . The pattern can be roughly divided into local recurrence, distant recurrence, and mixed recurrence (showing both local and distant recurrences) (schematic illustrations are shown in ). Local recurrence occurs adjacent to the resection cavity (≤2 cm) or within the clinical target volume of the radiation field and is the most commonly observed recurrence/progression pattern, whereas distant recurrence occurs distant (>2 cm) from the resection cavity or beyond the radiation field and is less frequently observed than local recurrence (usually less than 20% of tumor recurrence patterns) . The clinical and molecular characteristics of local and distant recurrences require further investigation. Distant recurrence was previously regarded as reflecting the capability of tumor cells to migrate throughout the brain, with a longer period until manifestation leading to a longer PFS . However, caution should be taken when interpreting these results, as the analyses included CNS WHO grade 4 IDH-mutant astrocytomas along with IDH-wildtype glioblastomas before the 2021 WHO classification, which confounded the results. A recent multicenter study suggested that a distant recurrence pattern was associated with better survival outcomes in cases of re-resection, which can be attributed to its greater accessibility to extensive resection compared with local recurrence . In terms of imaging, tumor recurrence/progression is mostly observed as a contrast-enhancing tumor, typically with necrosis, low ADC, and high rCBV values. Non-enhancing tumor recurrence may also be observed, especially after bevacizumab treatment . On MR spectroscopy, higher Cho and lower NAA and Cr values are observed, resulting in high Cho/NAA and Cho/Cr ratios . On amide proton transfer imaging, high APT signals are observed in tumor recurrence . On amino acid PET, increased tracer uptake has been observed in tumor recurrence . LM usually occurs concurrently with local or distant recurrence; however, a solitary manifestation of LM at tumor progression is not uncommon and can easily be overlooked or missed. Ventricular enlargement compared with the previous imaging study is an important finding that can be easily detected and should raise suspicion of LM . Post-contrast FLAIR, in addition to routinely acquired pre-contrast FLAIR, greatly increases the sensitivity of LM diagnosis at recurrence .
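As a worked example of the distance-based component of these definitions, the short Python sketch below labels a recurrence pattern as local, distant, or mixed from the distances of recurrent lesions to the resection cavity margin, using the 2 cm cut-off mentioned above; it deliberately ignores the alternative radiation-field (clinical target volume) criterion, and all names and example values beyond the 2 cm rule are illustrative assumptions rather than a validated classifier.

from typing import Iterable

LOCAL_DISTANCE_CUTOFF_CM = 2.0  # adjacent to the resection cavity (<= 2 cm) vs. distant (> 2 cm)

def classify_recurrence(lesion_distances_cm: Iterable[float]) -> str:
    """Classify a recurrence pattern from lesion distances to the resection cavity margin.

    Returns 'local', 'distant', or 'mixed'. Only the distance-based rule is
    modeled; the isodose/clinical-target-volume criterion used by some authors
    is not implemented in this sketch.
    """
    distances = list(lesion_distances_cm)
    if not distances:
        raise ValueError("at least one recurrent lesion is required")
    has_local = any(d <= LOCAL_DISTANCE_CUTOFF_CM for d in distances)
    has_distant = any(d > LOCAL_DISTANCE_CUTOFF_CM for d in distances)
    if has_local and has_distant:
        return "mixed"
    return "local" if has_local else "distant"

if __name__ == "__main__":
    # Hypothetical lesion distances (cm) for demonstration only.
    print(classify_recurrence([0.5]))        # local
    print(classify_recurrence([4.2]))        # distant
    print(classify_recurrence([1.0, 3.5]))   # mixed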
shows representative cases of tumor recurrence/progression with mixed recurrence (local recurrence with LM) and distant recurrence, whereas shows a representative case of tumor recurrence manifesting as a solitary LM.
Interpretation of post-treatment imaging in IDH-wildtype glioblastoma is complicated and challenging; a deep understanding of both the imaging findings and the clinical background is required to provide an accurate diagnosis to clinicians. Radiologists are an integral part of the multidisciplinary neuro-oncology team striving to achieve optimal care for patients with IDH-wildtype glioblastoma. Therefore, an accurate diagnosis of true tumor progression, distinguished from confounding treatment-related changes such as PsP or radiation necrosis, is essential. Effective communication among radiologists, neurosurgeons, neurologists, radiation oncologists, and pathologists regarding post-treatment imaging will ultimately lead to an enhanced understanding of the disease and significant advancement toward a successful fight against IDH-wildtype glioblastoma.
Characterization and technological functions of different lactic acid bacteria from traditionally produced Kırklareli white brined cheese during the ripening period

Manufacturing of local dairy products is a tradition that has been preserved for centuries. The products have been characterized by great diversity, and some of them have been known since ancient times. All cheeses from a certain geographic area represent a potential national treasure and cultural heritage (Terzić-Vidojević et al. ). Exemplary for this are white brined cheeses, which are produced especially in the Balkans and in Mediterranean countries such as Turkey, Egypt, and Greece. This type of cheese is matured in brine (10–18% NaCl) for a long period. The level of salt in the brine is crucial for its selective effect on microorganisms (Albayrak and Duran ).

In principle, the flavor, texture, and preservative properties of many fermented foods such as cheese are determined by the use of different species of the five major LAB genera: Lactobacillus spp., Lactococcus spp., Leuconostoc spp., Enterococcus spp., and Streptococcus spp. LAB are a heterogeneous group of gram-positive bacteria with a strictly fermentative metabolism, of which lactic acid is the most important metabolite (Temmerman et al. ). According to their specific roles, LAB involved in fermentation processes can be divided into two groups: starter lactic acid bacteria (sLAB) and non-starter LAB (nsLAB). sLAB may be added as starters and adjunct cultures. A starter is a culture of living microorganisms which is used to begin fermentation, producing specific changes in the chemical composition and sensory properties of the food product. On the other hand, nsLAB usually originate from the production and processing environments as spontaneous microbiota (Grujović et al. ).

In the traditional production method, in contrast to the basic principle of using the five major LAB, a high proportion of various nsLAB can be observed during the ripening period, depending on the type of cheese. The presence of various microbial groups could influence the lipolysis and proteolysis processes and therefore ultimately the ripening characteristics of the specific cheeses (Öner et al. ). In traditionally produced cheeses, such as matured or fermented white cheese, a wide variety of LAB and various other specific bacteria can be found, which fulfill a wide range of functions or act as indicator bacteria. Therefore, the aim of this study was (i) to investigate the physicochemical and microbiological properties of traditional Kırklareli white brined cheese from local dairies during the 3-month ripening period, (ii) to characterize the obtained LAB and their influence on the technological properties, and (iii) to determine their potential as starter cultures for traditional cheese production.
Collection of cheese samples

In this study, Kırklareli white brined cheese samples (n = 56) were taken from 14 different cheese manufacturing facilities with a daily production volume varying from 10 to 70 tons of cheese in Kırklareli Province, Turkey. The samples were received prior to the packaging process and delivered to the laboratory at the Department of Food Engineering of Kırklareli University under suitable conditions. From each factory, four mold cheese samples (weighing approximately 650 g) were taken, one of which was tested for analysis directly on the first day. When the pH of the samples reached 5.0, the samples were transferred to 1 L containers and stored in their own brine at 4 °C during the ripening period. The ripening of the samples took place over a 3-month period. Physicochemical and microbiological analyses were carried out at days 1, 15, 30, and 90 of the ripening period. The overall study design is presented in the study design section.

Physicochemical and microbiological analysis

The acidity was determined by titration, and a salt analysis was carried out on the cheese samples as described previously (Dertli et al. ). For the microbiological analysis, a total of 56 white cheese samples were tested for the presence of Enterobacteriaceae, Escherichia (E.) coli, Staphylococcus (S.) aureus, molds, and yeasts. For this purpose, 10 g of each cheese sample was serially diluted in 90 mL of sterile saline peptone water (0.9% NaCl, 0.1% peptone (Oxoid Ltd., Basingstoke, UK), pH 7.0) and was homogenized using a Stomacher Lab-Blender 400 (Seward Medical Ltd., London, UK). Serial dilutions were used to perform the plating procedure. The appropriate agars and incubation periods were applied as follows: Enterobacteriaceae were determined using Violet-Red-Bile-Dextrose (VRBD) Agar (Merck KGaA, Darmstadt, Germany) at 37 °C and incubated aerobically for 24–48 h (ISO ); total count of molds and yeasts was determined by DRBC Agar (Merck) at 25 °C and incubated aerobically for 5 days (Tournas et al. ); Baird Parker (BP) Agar (Oxoid) with egg-yolk tellurite addition was used to determine S. aureus and was incubated aerobically for 30–48 h at 35–37 °C (Tallent et al. ). The number of E. coli was determined using Tryptone Bile X-Glucuronide (TBX) Agar medium (Oxoid), and the respective agar plates were incubated for 4 h at 30 °C, followed by 18 h at 44 °C. Then, the bluish-green colonies formed after aerobic incubation were evaluated as E. coli (Feng et al. ).

Isolation and identification of LAB and nsLAB

Isolation from the cheese samples

Appropriate dilution series of white cheese samples were plated out on M17 Agar (Merck) and De Man, Rogosa, and Sharpe (MRS) Agar (Merck) for enumeration of total viable LAB. The corresponding plates were incubated aerobically at 30 °C for 48 h for the growth of LAB. Different colonies were selected from the agar plates and subjected to gram-staining, cell morphology, and catalase reaction tests as described previously (Dertli et al. ).

Identification by 16S rRNA gene sequencing

After selecting the LAB isolates according to phenotypic characteristics, 31 typical isolates for each phenotype were identified as the corresponding bacterial species by sequence analysis of the 16S rRNA gene. The genomic DNA of the isolates was extracted using the Qiagen Bacterial DNA extraction kit (Vivantis Technologies Sdn Bhd, Selangor Darul Ehsan, Malaysia) in accordance with the manufacturer’s recommendations.
A total of 1 μL of each genomic DNA from the isolates was used as a template for preparing each PCR reaction. Other components of the PCR approach, such as master mix and probes, had been previously prepared, and PCR analysis and sequencing were performed as described previously (Dertli et al. ). The obtained gene sequences were submitted to the National Center for Biotechnology Information (NCBI) BLAST database, aligned, and identified with a similarity criterion of 97–100%. Using Molecular Evolutionary Genetic Analysis (MEGA X) software, the 16S rDNA sequences of the cheese isolates were arranged to perform the phylogenetic analysis (Tamura et al. ). For this purpose, the neighbor-joining (NJ) method with 1000 bootstrap replicates was used and the phylogenetic tree was constructed (Saitou and Nei ). The partial 16S rDNA sequences of the 21 isolates identified in this study were deposited in the NCBI GenBank under accession numbers MT345607 to MT345627.

Determination of technological properties of isolates

Proteolytic activity

The levels of proteolytic activity of the isolates were determined by spectrophotometric measurement of tyrosine formation. This method has been described previously (Citti et al. ) and was applied in this study. The spectrophotometric measurements were performed at a wavelength of 650 nm (Shimadzu UV-120–02, Kyoto, Japan). The values obtained were compared with the results of the tyrosine standard and expressed as the tyrosine equivalent (µg/mL).

Hydrogen sulfide production

To determine the ability of the isolates to produce hydrogen sulfide, active cultures were cultivated in Triple Sugar Iron (TSI) Agar (Oxoid) medium and incubated for 2 weeks at 30 °C. At the end of the aerobic incubation period, a blackening of the color of the medium was taken to demonstrate the production of hydrogen sulfide (Lee and Simard ).

Lactic acid-producing abilities

The isolates were also tested for their ability to produce lactic acid. For this purpose, 1% inoculations were performed in skimmed milk medium (Biolife Italiana S.r.l., Milan, Italy) with 18-h active cultures. This was then incubated for 6 and 24 h, respectively, at 30 °C under aerobic conditions. To determine lactic acid development, pH values were determined at the end of the incubation period. The difference between the initial pH value of the skimmed milk medium and the pH value after incubation (ΔpH) was considered in the evaluation (Sarantinopoulos et al. ).
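Because the acidification read-out described above is simply the drop in pH of the skimmed milk after incubation, it can be illustrated with a few lines of Python; the function name and the example pH readings below are hypothetical and serve only to show how ΔpH6 and ΔpH24 are derived.

def delta_ph(initial_ph: float, ph_after_incubation: float) -> float:
    """Acidification activity as the drop in pH of skimmed milk after incubation."""
    return round(initial_ph - ph_after_incubation, 2)

if __name__ == "__main__":
    # Hypothetical readings for one isolate (not measured values from this study).
    initial = 6.60                     # pH of the uninoculated skimmed milk medium
    after_6h, after_24h = 6.10, 4.85   # pH after 6 h and 24 h of incubation at 30 degrees C
    print("dpH6  =", delta_ph(initial, after_6h))    # 0.5
    print("dpH24 =", delta_ph(initial, after_24h))   # 1.75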
Antibacterial activities

The cheese isolates were grown in MRS broth at 37 °C for 24 h under aerobic conditions, and the culture supernatants were obtained by filtration. The inhibitory effect of H2O2 was eliminated by adjusting the pH of the supernatants to 6.5 and adding catalase reagent (5 mg/mL) (Sigma, Missouri, USA). Nutrient agar media (Oxoid) containing 18-h grown pathogenic cultures of Bacillus cereus FMC 19, Escherichia coli ATCC 25922, Listeria monocytogenes RSKK 472 (serovar 1/2b), Salmonella typhimurium NRRLE 4463, and S. aureus ATCC 28213 were poured into petri dishes. Then, wells of 6 mm diameter were formed on the solidified medium. A supernatant of the isolate was placed in each well to be tested for antibacterial activity. The diameters of the inhibition zones formed were recorded after a 24-h incubation period (Dertli et al. ).

Antibiotic sensitivity

Antibiotic susceptibility testing of the isolates against ampicillin (AMP, 10 µg), gentamycin (GEN, 120 µg), clindamycin (CLI, 2 µg), chloramphenicol (CHL, 30 µg), vancomycin (VAN, 30 µg), kanamycin (KAN, 30 µg), streptomycin (STR, 10 µg), erythromycin (ERY, 10 µg), and tetracycline hydrochloride (TET, 30 µg) (Bioanalyse, Ankara, Turkey) was performed by disk diffusion assay on Mueller–Hinton agar (Bioanalyse) according to EFSA .
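The raw read-out of the disk diffusion assay is one inhibition-zone diameter per antibiotic, which is subsequently interpreted against breakpoints to build an antibiotic profile. The Python sketch below shows this bookkeeping for a single isolate; the breakpoint values and zone diameters are placeholders chosen for illustration and must not be mistaken for the antibiotic- and species-specific criteria actually applied in this study.

# Hypothetical breakpoints in mm: (resistant if zone is below, susceptible if zone is at least).
# Real interpretation requires the antibiotic- and species-specific criteria.
BREAKPOINTS_MM = {
    "AMP": (13, 17),
    "VAN": (15, 17),
    "KAN": (14, 18),
}

def interpret_zone(antibiotic: str, zone_mm: float) -> str:
    """Return an R/I/S call for one antibiotic from its inhibition-zone diameter."""
    resistant_below, susceptible_from = BREAKPOINTS_MM[antibiotic]
    if zone_mm < resistant_below:
        return "R"
    if zone_mm >= susceptible_from:
        return "S"
    return "I"

def antibiotic_profile(zones_mm: dict) -> dict:
    """Interpret every measured inhibition zone of one isolate."""
    return {ab: interpret_zone(ab, mm) for ab, mm in zones_mm.items()}

if __name__ == "__main__":
    # Placeholder measurements for one isolate.
    print(antibiotic_profile({"AMP": 24.0, "VAN": 10.5, "KAN": 16.0}))
    # {'AMP': 'S', 'VAN': 'R', 'KAN': 'I'}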
Physicochemical analysis of cheese samples

The salt and acidity values analyzed during the ripening period in the traditional white cheese samples are visualized in Fig. . The percentage salt levels of the cheese samples were determined to be 2.88 ± 0.72%, 4.16 ± 0.84%, 5.13 ± 1.46%, and 5.51 ± 1.22% at days 1, 15, 30, and 90 of the ripening period, respectively. Similar to the salt levels of the cheese samples, there was an increase in the acidity of the samples during the ripening period. In line with the findings of this study, Uğur and Öner reported the average salt content of white cheese samples to be 4.29% and 5.59% at days 1 and 90 of the ripening period, respectively. Total titratable acidity (g lactic acid per 100 g cheese) of the samples was found to be 0.38 ± 0.18%, 0.43 ± 0.09%, 0.34 ± 0.14%, and 0.66 ± 0.14% at days 1, 15, 30, and 90 of the ripening period, respectively. Overall, the findings of the present study were similar to previous findings reported for white cheese samples collected from different regions. Hayaloglu et al. reported the titratable acidity of pickled white cheese samples to be between 0.37 and 3.80%. Çakmakçı and Kurt determined the titratable acidity of fresh white cheese samples to be 0.37% and that of ripened ones to be 0.76%. While the titratable acidity of the cheese samples was lower on the first day, the acidity increased by the end of the ripening period. Fluctuations in titratable acidity in the later stages of ripening were caused by the formation of alkaline substances in the medium due to proteolysis during ripening and by the change in dry matter (Öner and Sarıdağ ). It is thought that the lipolysis process, that is, the resulting fatty acid composition, also has an effect on the increase in acidity that occurs after the 90th day. Dağdemir et al. and Hayaloglu et al. reported a similar phenomenon.

Microbiological characteristics of cheese samples

The total contamination levels, given in CFU per gram of cheese, of E. coli, Enterobacteriaceae, Staphylococcus spp., S. aureus, and molds and yeasts were determined four times during the 90-day ripening period. All results are displayed in Table . At the beginning of the ripening period, 6 of 14 cheese samples demonstrated E. coli numbers of 2–4.6 log CFU/g, while only 1 cheese sample was positive at the end of the ripening period (day 90). The initially high level of E. coli in cheese samples at the beginning of ripening might be a result of fecal contamination and an insufficient heating process (Beuchat and Ryu ). According to the Turkish Food Codex (Codex ), the maximum E. coli number in white cheese should be 10² CFU/g. One cheese sample in this study was therefore unsuitable in terms of E. coli numbers. The cheese samples were also tested for the presence of Enterobacteriaceae, and Enterobacteriaceae counts between 3.5 and 6.4 log CFU/g were observed on the first day of the ripening period (day 1). At the end of the ripening period, the lowest and highest Enterobacteriaceae numbers were 1.2 and 5.1 log CFU/g, respectively. As Enterobacteriaceae are an indicator microbial group, high numbers might reveal poor hygiene and sanitation conditions as well as fecal contamination (Yücel and Ulusoy ). The testing of Staphylococcus spp. and S. aureus numbers on the first day of the ripening period showed counts between 4.2 and 7.2 log CFU/g, and an approximate decrease of 1.5 log CFU/g was observed in staphylococcus numbers at day 90.
For S. aureus, eight samples tested negative, and in six cheese samples, S. aureus was observed at a contamination level of 3.3–5.1 log CFU/g. At the end of the ripening period, S. aureus was still observed in three cheese samples (2.0–3.4 log CFU/g). The cell counts in these samples were higher than the permitted limits for S. aureus according to the Turkish Food Codex (Codex ). Previously, higher numbers of coagulase-positive S. aureus were reported in traditional cheese samples from different regions (Rola et al. ; Saka and Terzi Gulel ). Therefore, more attention should be paid to milk quality and hygiene practices to avoid possible problems associated with these traditional cheese samples. Producers can select longer ripening periods recognized in the industry, such as 6 months and 1 year, as previously discussed (Öner et al. ). The yeast and mold numbers in the cheese samples were observed to be between 2.1–5.9 log CFU/g and 1.2–6.2 log CFU/g at days 1 and 90, respectively. These numbers were similar to previous observations (Macedo et al. ; Öner et al. ), and in general, no decrease in the yeast and mold counts during the ripening period was observed, which was in agreement with previous findings (Öner et al. ). The non-inhibition of the yeasts and molds in white cheese samples was associated with their potential to metabolize lactic acid, and they might also contribute to the ripening of cheese (Macedo et al. ). Nonetheless, such high cell numbers in white cheese are not acceptable according to the Turkish Food Codex (Codex ).

Identification of LAB and nsLAB

During ripening, a slight increase in the LAB counts was observed. At the first sampling time, mean counts of 7 log CFU/g for Lactobacillus spp. and 9 log CFU/g for Lactococcus spp. were obtained. Further into the ripening phase, the numbers increased slightly, and the highest counts of all microbial groups (> 10 log CFU/g) were reached at 90 days of ripening (Fig. ). Our results were in accordance with previous studies (Öner et al. ; Dertli et al. ). The counts on the two different media (M17 and MRS agar) at the same sampling time were generally similar. After selecting the colonies according to phenotypic characteristics on both M17 and MRS agars, a total of 375 isolates (three to five isolates for each phenotype) were obtained from the different cheese samples during the different ripening periods. We selected one isolate for each typical phenotype for further cultural characterization. Additionally, we selected a total of 51 isolates (one isolate for each typical phenotype) for further phenotypic identification (Table ). The genotypic identification of the selected isolates (n = 32) by sequence analysis of the 16S rRNA gene revealed the presence of eight Lactococcus (Lc.) lactis, two Latilactobacillus (Lt.) curvatus, one Lactobacillus (Lb.) casei, one Lb. plantarum, eight Enterococcus (E.) durans, four E. faecalis, one E. faecium, five Streptococcus (St.) macedonicus, and one Weissella (W.) paramesenteroides isolate, all collected from traditional white cheese samples. Figure demonstrates the MEGA X alignment of the partial 16S rRNA gene sequences of the selected distinct LAB isolates (n = 32), revealing their phylogenetic relationships, which resulted in the formation of different subgroups according to their species identification.
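For readers less familiar with the log CFU/g values used throughout this section, the following minimal Python sketch shows how a colony count obtained from the dilution-plating procedure described in the Methods translates into CFU and log10 CFU per gram of cheese; the plate count, dilution, and plated volume used here are hypothetical, and the function assumes the dilution factor is expressed relative to the original sample.

import math

def log_cfu_per_gram(colonies: int, dilution_factor: float, plated_volume_ml: float = 0.1) -> float:
    """Convert a plate count to log10 CFU per gram of sample.

    dilution_factor is the total dilution of the plated suspension relative to
    the original sample (e.g. 1e-5 for the 10^-5 tube of the dilution series
    started from 10 g of cheese in 90 mL of diluent).
    """
    cfu_per_gram = colonies / (plated_volume_ml * dilution_factor)
    return round(math.log10(cfu_per_gram), 2)

if __name__ == "__main__":
    # Hypothetical example: 85 colonies counted on the 10^-5 plate, 0.1 mL spread-plated.
    print(log_cfu_per_gram(85, 1e-5))  # ~7.93 log CFU/g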
So far, several LAB species, especially enterococci, have been reported to be present in the natural microflora of white cheese samples (Hayaloglu et al. ; İspirli et al. ; Uymaz et al. ). Importantly, all enterococci isolates obtained in this study were still present in the white cheese samples at day 90 of the ripening period. These observations indicate that enterococci may play a role as nsLAB in Kırklareli white cheese. Similar to our findings, it has been previously reported that E. durans and E. faecalis were dominant species in Turkish white cheese (İspirli et al. ). Another species isolated in the present study was St. macedonicus, and previous studies reported the presence of this species in different cheeses including Turkish white cheese (Lombardi et al. ; Ozteber and Başbülbül ; Uymaz et al. ). St. macedonicus was suggested to play a role in the formation of the characteristic flavor of some cheese types (Gobbetti et al. ) as a potential nsLAB. It should be noted that, unlike previous findings, a lower level of St. macedonicus was present here, indicating that this bacterium may also be present in only moderate amounts in Turkish white cheese (Uymaz et al. ). In contrast to the enterococci isolates, St. macedonicus was present in the white cheese samples only up to day 15 of ripening, which was also the case for the isolate W. paramesenteroides #11–15-A. Previous reports also confirmed the presence of Weissella species in several cheeses, including Ezine cheese (Gerasi et al. ; Uymaz et al. ). Furthermore, among the LAB species important for cheese manufacturing as starter cultures, eight isolates of Lc. lactis, two isolates of Lt. curvatus, and one isolate each of Lb. plantarum and Lb. casei were obtained from the different samples in the present study. Similar to our findings, it has been previously reported that these species were dominant starter cultures in Turkish white cheese (Ertürkmen and Öner ; İspirli et al. ). Two distinct isolates of Lt. curvatus (#12–30-B and #8–30-D) were obtained. Previously, Lt. curvatus has been shown to be present as an nsLAB and supplementary culture (Gobbetti et al. ) in the microflora of different cheese types as well as other dairy products (Antonsson et al. ; Ozteber and Başbülbül ). The main LAB species isolated in this study was Lc. lactis, confirming previous observations (Ertürkmen and Öner ; İspirli et al. ). Overall, the results of this study showed the presence of different LAB isolates, all of which could be associated with the ripening of the different cheese samples.

Technological characteristics of LAB

A rich microbial diversity of LAB was observed in the Kırklareli white brined cheese samples, which can originate from low pasteurization standards, as temperatures above 65 °C were not reached during cheese production. These LAB constitute the natural starter microflora of white cheese, as no starter cultures are used in the production of Kırklareli white cheese. To understand the starter potential of LAB from white cheese, several properties of selected isolates were tested, as moderate acid-forming ability and proteolytic activity are crucial for their use as starter cultures during white cheese production (Settanni and Moschetti ). In terms of acid production, a rapid pH decline is essential to achieve adequate coagulation, curd firmness, and control of bacterial pathogen growth.
The ΔpH values of the isolates were assessed, where a value of < 1 is considered low, a value between 1 and 1.5 medium, and a value greater than 1.5 a high acid-forming level (Bradley et al. ). All isolates showed low acidogenic activity in skimmed milk, with a pH decrease (ΔpH6) after 6-h incubation at 30 °C ranging from 0.00 to 0.96 pH units. Over the entire 24-h incubation period, acid production was generally at an intermediate level, ranging from 0.00 to 2.12 pH units, and the highest acidification activity was observed in Lc. lactis isolate #7–30-D, suggesting its potential as a starter LAB (Marshall ; Settanni and Moschetti ). In this context, the acidification activity of the Lc. and Lb. isolates was determined by isolate-specific characteristics, indicating their potential use as starter and/or adjunct cultures to prevent defective fermentations. The results are presented in Table . Another important characteristic of the cheese isolates was their proteolytic activity, which ranged from 34.05 to 76.45% and was likewise determined by isolate-specific characteristics. None of the isolates was positive for the production of H2S. Ammar et al. categorized the proteolytic activity of isolates as strong, moderate, and low, corresponding to values of 100–200, 50–100, and less than 50 μg tyrosine per milliliter, respectively. The results of the present study showed that all isolates had moderate proteolytic activity except E. faecalis #P13 12–90-B and St. macedonicus #P18 2–15-C, which had low proteolytic activity. In general, isolates with moderate proteolytic activity should be favored for white cheese production in order to avoid the development of bitterness during ripening. As shown in Table , all isolates were sensitive to most antibiotics tested in this study, and 25 different antibiotic profiles (AP1 to AP25) were observed in terms of the degree of inhibition, depending on isolate-specific characteristics and the antibiotics tested. For example, all isolates were strongly inhibited by ampicillin, whereas the zones of inhibition were generally smaller for kanamycin and streptomycin. These two antibiotics have also been shown to be among the antibiotics to which LAB can exhibit high levels of resistance (Pesavento et al. ; İspirli et al. ). Importantly, vancomycin resistance was noteworthy in a few isolates within the antibiotic profiles AP1, AP2, and AP3 in this study. It is worth mentioning that previous reports have documented certain isolates of enterococci derived from human feces and cheese as vancomycin-resistant (İspirli et al. , ). Moreover, no antibiotic resistance was detected in the various St. macedonicus isolates obtained in this study. The results presented here are consistent with previous observations demonstrating the susceptibility of St. macedonicus isolates (Lombardi et al. ). Similar to previous reports of low antibiotic resistance in W. paramesenteroides (Jeong and Lee ), the cheese isolate W. paramesenteroides #P19 11–15-A was not found to be resistant to the antibiotics tested. Despite the promising technological properties observed, the isolates of Enterococcus spp., St. macedonicus, and W. paramesenteroides found in the current study cannot be considered for incorporation into cheese production.
This is because these species do not have qualified presumption of safety status, as they are among the leading causes of community-acquired and nosocomial infections (EFSA ). Furthermore, in contrast to previous studies showing resistance of Lt. curvatus isolates to kanamycin and streptomycin (Shazali et al. ), the two Lt. curvatus isolates from the white cheese samples were sensitive to all tested antibiotics, including kanamycin and streptomycin. Overall, the results of this study showed that the selected Lc. lactis, Lb. plantarum, and Lb. casei isolates within the antibiotic susceptibility profiles AP24 and AP25 (Table ) were not resistant to the tested antibiotics, which may be a positive feature for their use in the industrial production of these cheeses. Another important characteristic of LAB cultures from fermented food products is their antibacterial activity. The antibacterial activities of the different LAB isolates were tested against the important food pathogens B. cereus FMC 19, E. coli ATCC 25922, L. monocytogenes RSKK 472, S. typhimurium NRRLE 4463, and S. aureus ATCC 28213 (Table ). In general, the antibacterial activity of the isolates was low, as only three of 18 isolates showed antibacterial activity (Table ). The highest antibacterial activity was observed for the isolate E. durans #P3 5–1-C, which strongly inhibited B. cereus FMC 19, E. coli ATCC 25922, and L. monocytogenes RSKK 472. The isolate E. durans #P4 9–90-B similarly inhibited E. coli ATCC 25922 and L. monocytogenes RSKK 472, although it was not effective against B. cereus FMC 19. The last isolate to show antibacterial activity in this study was Lt. curvatus #12–30-B, which was effective only against L. monocytogenes RSKK 472. Apart from these three isolates, none of the other 33 isolates, including Lc. lactis, Lb. casei, and Lb. plantarum, showed antibacterial activity, suggesting that isolate-specific characteristics determine antibacterial activity, which may be due to the production of antibacterial substances such as bacteriocins (İspirli et al. ). Studies testing the genotypic and phenotypic characteristics of the bacteriocin production abilities of these isolates are still ongoing.
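As a compact illustration of how the numerical cut-offs used in this section are applied, the following Python sketch assigns the proteolytic-activity categories of Ammar et al. (strong 100–200, moderate 50–100, low < 50 µg tyrosine/mL) and the acid-forming levels defined above (ΔpH < 1 low, 1–1.5 medium, > 1.5 high); the isolate records in the example are invented placeholders, not measured values from this study.

def proteolysis_category(tyrosine_ug_per_ml: float) -> str:
    """Strong / moderate / low proteolytic activity per the cited thresholds."""
    if tyrosine_ug_per_ml >= 100:
        return "strong"
    if tyrosine_ug_per_ml >= 50:
        return "moderate"
    return "low"

def acidification_category(delta_ph: float) -> str:
    """Low / medium / high acid-forming level from the 24-h pH drop."""
    if delta_ph > 1.5:
        return "high"
    if delta_ph >= 1.0:
        return "medium"
    return "low"

if __name__ == "__main__":
    # Invented example records: (isolate label, tyrosine equivalent in ug/mL, dpH24).
    isolates = [("isolate_A", 72.0, 2.12), ("isolate_B", 41.5, 0.80)]
    for name, tyr, dph in isolates:
        print(name, proteolysis_category(tyr), acidification_category(dph))
    # isolate_A moderate high
    # isolate_B low low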
The salt and acidity values analyzed during the ripening period in traditional white cheese samples are visualized in Fig. . The percentage salt levels of the cheese samples were determined to be 2.88 ± 0.72%, 4.16 ± 0.84%, 5.13 ± 1.46%, and 5.51 ± 1.22% at days 1, 15, 30, and 90 of the ripening period, respectively. Similar to the salt levels of the cheese samples, there was an increase in the acidity of samples during the ripening period. Similar to the findings of this study, Uğur and Öner reported the average salt content of white cheese samples to be 4.29% and 5.59% at days 1 and 90 of the ripening period, respectively. Total titratable acidity (g lactic acid per 100 g cheese) of the samples was found to be 0.38 ± 0.18%, 0.43 ± 0.09%, 0.34 ± 0.14%, and 0.66 ± 0.14% at days 1, 15, 30, and 90 of the ripening period, respectively. However, the findings of the present study were similar to the previous findings reported for the white cheese samples collected from different regions. Hayaloglu et al. reported the titratable acidity of pickled white cheese samples to be between 0.37 and 3.80%. Çakmakçı and Kurt determined the titration acidity of fresh white cheese samples to be 0.37% and that of ripened ones to be 0.76% in their study. While the titration acidity of the cheese samples was lower on the first day, the acidity increased by the end of the ripening period. Fluctuations in titratable acidity in the later stages of ripening were caused by the formation of alkaline substances in the medium due to proteolysis during ripening and the change in dry matter (Öner and Sarıdağ ). It is thought that the lipolysis process, that is, the resulting fatty acid composition, also has an effect on the increase in acidity that occurs after the 90th day. Dağdemir et al. and Hayaloglu et al. also obtained similar phenomenon to our study.
The total contamination level given in CFU per gram cheese of E. coli , Enterobacteriaceae, Staphylococcus spp., S. aureus , and mold-yeast was determined four times during the 90-day ripening period. All results are displayed in Table . At the beginning of the ripening period, a total of 6 of 14 cheese samples demonstrated E. coli numbers of 2–4.6 log CFU/g, while only 1 cheese sample was positive at the end of the ripening period (day 90). The initial high level of E. coli in cheese samples at the beginning of ripening might be a result of fecal contamination and an insufficient heating process (Beuchat and Ryu ). According to the Turkish Food Codex (Codex ), the maximum E. coli numbers in white cheese should be 10 2 CFU/g. One cheese sample in this study was therefore unsuitable in terms of E. coli numbers. The cheese samples were also tested for the presence of Enterobacteriaceae, and counts of Enterobacteriaceae were observed between 3.5 and 6.4 log CFU/g at the first day of the ripening period (day 1). At the end of the ripening period, the lowest and highest Enterobacteriaceae numbers were 1.2 and 5.1 log CFU/g, respectively. As an indicator of the Enterobacteriaceae microbial group, high numbers might reveal poor hygiene and sanitation conditions as well as fecal contamination (Yücel and Ulusoy ). The testing of Staphylococcus spp. and S. aureus numbers during the first day of the ripening period showed counts between 4.2 and 7.2 log CFU/g, and an approximate decrease of 1.5 log level CFU/g was observed in staphylococcus numbers at day 90. For S. aureus , eight samples tested negative, and in six cheese samples, S. aureus was observed with a contamination level of 3.3–5.1 log CFU/g. At the end of the ripening period, in three cheese samples, S. aureus was still observed (2.0–3.4 log CFU/g). The cell count in these was higher than the allowed detection limits of S. aureus according to the Turkish Food Codex (Codex ). Previously, higher numbers of coagulase positive S. aureus were reported from different regions in traditional cheese samples (Rola et al. ; Saka and Terzi Gulel ). Therefore, more attention should be paid to the milk quality by practicing more hygiene to avoid the occurrence of possible problems associated with these traditional cheese samples. Producers can select longer ripening periods recognized in the industry, such as 6 months and 1 year, as previously discussed (Öner et al. ). The yeast and mold numbers in cheese samples were observed to be between 2.1–5.9 log CFU/g and 1.2–6.2 log CFU/g at days 1 and 90, respectively. These numbers were similar to previous observations (Macedo et al. ; Öner et al. ), and in general, no decrease in the yeast and mold counts during the ripening period was observed, which was in agreement with previous findings (Öner et al. ). The non-inhibition of the yeasts and molds in white cheese samples was associated with their potential to metabolize lactic acid, and they might also contribute to the ripening of cheese (Macedo et al. ). Nonetheless, in terms of white cheese, these high numbers of cells are not acceptable according to the Turkish Food Codex (Codex ).
During the time of ripening, a slightly increase in the count of the LAB was observed. At the first sampling time, a mean count of 7 log CFU/g for Lactobacillus spp. and 9 log CFU/g for Lactococcus spp. was obtained. Further into the ripening phase, the numbers increased slightly and the highest counts of all microbial groups (> 10 log CFU/g) were reached at 90 days of ripening (Fig. ). Our result was also in accordance with previous studies (Öner et al. ; Dertli et al. ). The numbers on the two different media (M17 and MRS agar) at the same sampling time were generally similar. After selecting the colonies according to phenotypic characteristics on both agars M17 and MRS, a total of 375 isolates (three to five isolates for each phenotype) were obtained from the different cheese samples during different ripening periods. We selected one isolate for each typical phenotype for further cultural characterization. Additionally, we selected a total of 51 isolates (one isolate for each typical phenotype) for further phenotypic identification (Table ). The genotypic identification of the selected isolates ( n = 32) by sequence analysis of the 16S rRNA gene revealed the presence of eight Lactococcus ( Lc. ) lactis , two Latilactobacillus ( Lt. ) curvatus and each one isolate of Lactobacillus ( Lb. ) casei and Lb. plantarum , eight Enterococcus ( E. ) durans , four E. faecalis , one E. faecium , five Streptococcus ( St. ) macedonicus , and one Weissella ( W. ) paramesenteroides , collected from traditional white cheese samples. Figure demonstrates the MEGA X alignments of the 16S rRNA partial gene sequences of selected distinct LAB isolates ( n = 32). This reveals their phylogenetic relationship, which resulted in the formation of different subgroups according to their species identification. So far, several LAB species, especially enterococci, were reported to be present in the natural microflora of white cheese samples (Hayaloglu et al. ; İspirli et al. ; Uymaz et al. ). Importantly, all isolates of enterococci obtained in this study were still present in white cheese samples at day 90 of the ripening period. These observations indicate that enterococci may play a role as nsLAB in Kırklareli white cheese. Similar to our findings, it has been previously reported that E. durans and E. faecalis were dominant species in Turkish white cheese (İspirli et al. ). Another species isolated in the present study was St. macedonicus , and previous studies reported the presence of this species in different cheeses including Turkish white cheese (Lombardi et al. ; Ozteber and Başbülbül ; Uymaz et al. ). St. macedonicus was suggested to play a role in the formation of the characteristic flavor of some cheese types (Gobbetti et al. ) as a potential nsLAB. It should be noted that unlike previous findings, a lower level of St. macedonicus was present in Turkish white cheese, indicating that this bacteria may also be present in moderate amounts in Turkish white cheese (Uymaz et al. ). Compared to the enterococci isolates, St. macedonicus was present in the white cheese samples up until day 15 of ripening, which was also the case for the isolate W. paramesenteroides #11–15-A. Previous reports also confirmed the presence of Weissella species in several cheeses, including Ezine cheese (Gerasi et al. ; Uymaz et al. ). Furthermore, within important LAB species for cheese manufacturing as starter cultures, eight isolates of Lc. lactis , two isolates of Lt. curvatus , and each one isolate of Lb. plantarum and Lb. 
casei were obtained from the different samples in the present study. Similar to our findings, it has been previously reported that these species were dominant starter cultures in Turkish white cheese (Ertürkmen and Öner ; İspirli et al. ). Two distinct isolates of Lt. curvatus (#12–30-B and #8–30-D) were isolated. Previously, Lt. curvatus as nsLAB and supplementary culture (Gobbetti et al. ) have been shown to be present in the microflora of different cheese types as well as other dairy products (Antonsson et al. ; Ozteber and Başbülbül ). The main LAB species isolated in this study were also Lc. lactis , confirming previous observations (Ertürkmen and Öner ; İspirli et al. ). Overall, the results of this study showed the presence of different LAB isolates, all of which could be associated with ripening of different cheese samples.
A rich microbial diversity in LAB was observed in Kırklareli white brined cheese samples, which can originate from low pasteurization standards, as temperatures above 65 °C were not reached during the cheese production. These LAB constitute the natural starter microflora of white cheese, as no starter cultures are used in the production of Kırklareli white cheese. To understand the starter potential of LAB from white cheese, several properties of selected isolates were tested, as their moderate acid-forming ability and proteolytic activity are crucial for their use as starter cultures during white cheese production (Settanni and Moschetti ). In terms of acid production, a rapid pH decline is essential to achieve adequate coagulation, curd firmness, and control of bacterial pathogen growth. The ΔpH value of the isolates, where a value of < 1 is considered low, between 1 and 1.5 is considered medium, and greater than 1.5 is considered to have a high acid-forming level (Bradley et al. ), was assessed. All isolates showed a low acidogenic activity in skimmed milk, with a pH decrease (ΔpH6) after 6-h incubation at 30 °C ranging from 0.00 to 0.96 pH units. Generally, acid production levels were at an intermediate level for the entire 24-h incubation period ranging from 0.00 to 2.12 pH units, and the highest acidification activity was observed in Lc. lactis isolate #7–30-D, suggesting their potential as starter lactic acid bacteria (LAB) (Marshall ; Settanni and Moschetti ). In this context, the acidification activity of Lc. and Lb. isolates was determined by isolate-specific characteristics, indicating their potential use as starter and/or adjunct cultures to prevent defective fermentations. The results are presented in Table . Another important characteristic of the cheese isolates was the proteolytic activity, which ranged from 34.05 to 76.45%, the expression of which was determined by isolate-specific characteristics. None of the isolates was positive for the production of H 2 S. Ammar et al. categorized the proteolytic activity values of the isolates as strong, moderate, and low, with proteolytic activity values of 100–200, 50–100, and less than 50 μg tyrosine in milliliters, respectively. The results of the present study showed that all found isolates had moderate proteolytic activity except isolates E. faecalis #P13 12–90-B and St. macedonicus #P18 2–15-C, which had low levels of proteolytic activity. In general, isolates with moderate proteolytic activity should be favored for white cheese production in order to avoid the development of bitterness during ripening. As shown in Table , all isolates were sensitive to most antibiotics tested in this study, and 25 different antibiotic profiles (AP1 to AP25) were observed in terms of the degree of inhibition depending on isolate-specific conditions and the antibiotics tested. For example, all isolates were strongly inhibited by ampicillin, whereas the zones of inhibition were generally lower for kanamycin and streptomycin. These two antibiotics have also been shown to be among the antibiotics to which LAB can exhibit high levels of resistance (Pesavento et al. ; İspirli et al. ). Importantly, the presence of vancomycin-resistant was noteworthy in a few isolates within the antibiotic profiles AP1, AP2, and AP3 in this study. It is worth mentioning that previous reports have documented certain isolates of enterococci derived from human feces and cheese as vancomycin-resistant (İspirli et al. , ). 
Moreover, no antibiotic resistance was detected in the various St. macedonicus isolates recovered in this study. The results presented here are consistent with previous observations demonstrating the susceptibility of St. macedonicus isolates (Lombardi et al. ). Similar to previous reports of low antibiotic resistance in W. paramesenteroides (Jeong and Lee ), the cheese isolate W. paramesenteroides #P19 11–15-A was not found to be resistant to the antibiotics tested. Despite the promising technological properties observed, the isolates of Enterococcus spp., St. macedonicus and W. paramesenteroides found in the current study cannot be considered for incorporation into cheese production, because these species do not have qualified presumption of safety status, being among the leading causes of community-acquired and nosocomial infections (EFSA ). Furthermore, in contrast to previous studies showing resistance of a Lt. curvatus isolate to kanamycin and streptomycin (Shazali et al. ), the two Lt. curvatus isolates from the white cheese samples were sensitive to all tested antibiotics, including kanamycin and streptomycin. Overall, the selected Lc. lactis , Lb. plantarum and Lb. casei isolates within the antibiotic susceptibility profiles AP24 and AP25 (Table ) were not found to be resistant to antibiotics, which may be a positive feature for their use in the industrial production of these cheeses.

Another important characteristic of LAB cultures from fermented food products is their antibacterial activity. The antibacterial activities of the different LAB isolates were tested against the important food pathogens B. cereus FMC 19, E. coli ATCC 25922, L. monocytogenes RSKK 472, S. typhimurium NRRLE 4463 and S. aureus ATCC 28213 (Table ). In general, the antibacterial activity of the isolates was low, as only three of 18 isolates showed antibacterial activity (Table ). The highest antibacterial activity was observed for the isolate E. durans #P3 5–1-C, which strongly inhibited B. cereus FMC 19, E. coli ATCC 25922 and L. monocytogenes RSKK 472. The isolate E. durans #P4 9–90-B similarly inhibited E. coli ATCC 25922 and L. monocytogenes RSKK 472, although it was not effective against B. cereus FMC 19. The last isolate to show antibacterial activity in this study was Lt. curvatus #12–30-B, which was only effective against L. monocytogenes RSKK 472. Apart from these three isolates, none of the other 33 isolates, including Lc. lactis , Lb. casei and Lb. plantarum , showed antibacterial activity, suggesting that isolate-specific characteristics determine antibacterial activity, which may be due to the production of antibacterial substances such as bacteriocins (İspirli et al. ). Studies testing the genotypic and phenotypic characteristics of the bacteriocin production abilities of these isolates are still ongoing.
This study characterized the physicochemical and microbiological properties of traditionally produced Kırklareli white brined cheese, and the nsLAB/sLAB profile was determined during ripening. As potential starter cultures, Lc. lactis dominated the bacterial profile in the cheese samples, followed to a lesser extent by Lb. casei and Lb. plantarum . Notably, no antibiotic resistance was observed for any of these isolates. Moderate levels of acidifying and proteolytic activity were observed in all isolates, confirming their potential as starter cultures in traditional white cheese production in the Kırklareli region. The results presented here allow a first conclusion to be drawn about the suitability of these isolates for cheese production, thus forming the basis for further investigations. The presence of these Lc. lactis , Lb. casei and Lb. plantarum isolates as potential adjunct cultures in white cheese should be further investigated; indeed, studies on this topic have already been initiated with some of these isolates.
Paediatricians play a key role in preventing early harmful events that could permanently influence the development of the gut microbiota in childhood | 6a731c23-fb2f-485b-8bfe-b53e3b99f166 | 6852013 | Pediatrics[mh]

The microbial communities hosted by the human gut have been forged over millions of years of co-evolution with humans, to achieve a symbiotic relationship leading to physiological homoeostasis. The gut microbiota has become a new, fascinating and promising area of research, which enables us to understand the development of gut functions and some health disorders and diseases, as well as their treatment or prevention. The development of the gut microbiota occurs primarily during infancy. Evidence regarding the implications of the gut microbiota in children is increasing, and new insights have been reported about the development of the microbiome during early life. For example, advances in genome sequencing technology and metagenomic analysis are broadening our understanding of the gut microbiota and highlighting differences between healthy and diseased states. Healthcare professionals involved in paediatric care may find it difficult to interpret the complex data published in specialised literature. However, this information is of considerable importance in paediatric practice. Different definitions have also caused confusion, including the interchangeable use of the basic terms microbiome and microbiota by the medical community and the general public when talking about the local mini-ecosystem formed by the collection of microorganisms in the gut.
The European Paediatric Association, the Union of the National European Paediatric Societies and Associations, convened a panel of eight independent European experts from five countries to outline the essential elements of the current knowledge on the gut microbiota that may be useful for general paediatricians in their practice. The panel was chosen based on the experts' scientific profiles and publication history, and all members were active participants in the work and activities of the association. The panel held their first meeting with regard to this review at the 8th Europaediatrics Congress in Bucharest in June 2017, where they discussed relevant issues about the definition and function of the gut microbiota. They decided that a particular focus of this review would be to highlight the factors that influence the gut microbiota in early life, as well as their potential harmful effects in later life, for the benefit of general paediatricians. Each panel member was responsible for reviewing the literature on a given topic, according to their specific expertise. They searched for papers published in English up to June 2018 by using PubMed and the Cochrane Library. The members then summarised the relevant findings on their given topic, and the panel discussed these findings in a series of meetings until they reached a final consensus.
A microbiological approach to understanding the gut microbiota

Previously called the gut microflora, the microbial communities are composed of approximately 10^14 bacteria, around 10 times the number of cells in the human body . The term gut microbiota refers to the organisms that comprise the microbial community, while the term microbiome refers to the collective genomes of the microbes, including bacteria, bacteriophages, fungi, protozoa and viruses that live inside and on the human body. The gut microbiota may be considered a human organ that can be transplanted, and it has its own functions, such as modulating the expression of genes involved in mucosal barrier fortification, angiogenesis and postnatal intestinal maturation of several gut-associated systems . The gut microbiota comprises more than 2000 microbial species. Its diversity has been revealed by the application of metagenomics, based on sequencing of the 16S ribosomal ribonucleic acid gene or of deoxyribonucleic acid . Firmicutes and Bacteroidetes are the two dominant bacterial phyla in most individuals. Other phyla include Proteobacteria, Actinobacteria , Fusobacteria and Verrucomicrobia . Groups of bacterial families have been classified into enterotypes on the basis of their functions. The term enterotype and its definition remain debated. For example, the classification may be based on the metabolism of dietary components and the ability to metabolise drugs. The aim of this classification is to help us to understand the role of the gut microbiota in health and disease. Ageing is associated with changes in the diversity of noncultured species that current techniques are unable to grow in the laboratory. These changes include a greater proportion of Bacteroides , a distinct abundance of Clostridium clusters, an increased enterobacteria population and a lower number of bifidobacteria. The taxonomic alterations may be due to changes in diet, such as less fibre, and/or the increased use of antibiotics with advancing age . There is no definition of a normal microbiota, since the bacterial species vary in different groups of individuals. The vast majority of microbial species give rise to symbiotic host–bacterial interactions that are fundamental for human health. Disrupting the development of a stable gut microbiota, which is known as dysbiosis, may be associated with several clinical conditions. These include nosocomial infections, necrotising enterocolitis in premature infants, inflammatory bowel disease, obesity, autoimmune diseases, allergies or even functional bowel disorders or behavioural problems.

Factors influencing neonatal intestinal colonisation

Foetal colonisation and prematurity

The sterility of the gut of the foetus in utero has been challenged by studies that have identified bacteria, bacterial deoxyribonucleic acid or bacterial products in the meconium, amniotic fluid and placenta. These indicate the initiation of microbial colonisation from the mother to offspring , . Therefore, during developmental phases, the foetus could encounter bacteria in utero that might contribute to establishing the microbiota before delivery. This prenatal bacterial colonisation of the foetal gut might be a source of microbial stimulation, providing a primary signal for the maturation of a balanced postnatal innate and adaptive immune system. However, studies stating the existence of this in utero microbiota remain controversial , .
Importantly, it has been shown that meconium with low bacterial diversity has been associated with a more frequent onset of sepsis in very low birth weight babies . The first and most important phase of normal colonisation occurs when the newborn passes through the birth canal and ingests maternal vaginal and faecal microorganisms. These bacteria proliferate further when oral feeding is initiated. After 48 hours, the number of bacteria is already as high as around 10^4–10^6 colony-forming units per millilitre of intestinal content. However, many factors can influence this process and they may potentially impair the establishment of what is known as symbiosis (Fig. ) . The pattern of bacterial colonisation in preterm infants differs from the pattern observed in the healthy gut of full-term infants during the neonatal period . This abnormal colonisation, which is mostly due to the routine use of sterile formula and antibiotics in neonatal intensive care units, could play a central role in feeding intolerance. It could also be implicated in the development of necrotising enterocolitis, a severe disease that primarily affects premature infants and often leads to death or to short bowel syndrome, which requires an extensive bowel resection .

Mode of delivery

The microbiota of vaginally delivered infants mirrors the vaginal and gut microbiota of the mother. Infants delivered by Caesarean section have reduced bacterial biodiversity, and colonisation by Bifidobacteria can be delayed by up to six months, in contrast to vaginally delivered infants , . Infants delivered by Caesarean section exhibit bacterial communities composed of prominent genera, such as Lactobacillus , Prevotella , Escherichia , Bacteroides and Bifidobacterium . After a Caesarean section, the gut microbiota is characterised by a reduced number of Bifidobacteria species. Although vaginally delivered neonates exhibit individual microbial profiles, these are characterised by predominant groups, such as Bifidobacterium longum and Bifidobacterium catenulatum . Dominguez-Bello et al. used multiplex 16S ribosomal ribonucleic acid gene pyrosequencing to characterise the bacterial communities of mothers and their neonates. Interestingly, they reported that vaginally delivered infants acquired bacterial communities that resembled their own mothers' vaginal microbiota and that these were dominated by Lactobacillus , Prevotella or Sneathia spp. In contrast, infants delivered by Caesarean section harboured bacterial communities similar to those found on the skin surface and these were dominated by Staphylococcus , Corynebacterium and Propionibacterium spp. .

Influence of feeding

The mode of oral feeding may influence the composition of the gut microbiota in infants. Breastfeeding has been associated with higher diversity, as assessed using the Shannon index . Human milk contains beneficial factors for the gut microbiota, such as oligosaccharides . Oligosaccharides function as prebiotics, by stimulating the growth of Bifidobacterium and Lactobacillus species, thereby selectively altering the microbial composition of the intestine . It is likely that evolutionary selective pressure has equipped Bifidobacterium longum subsp. infantis with multiple enzymes to deconstruct human milk glycans. As a result, this subspecies is able to outcompete other Bifidobacteria as well as other commensals and pathogens in the gut lumen of healthy breastfed infants . In formula-fed infants, Enterococci , Bacteroides and Clostridia predominate.
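The Shannon diversity index mentioned above is simply H' = -Σ p_i ln p_i, where p_i is the relative abundance of taxon i. The sketch below is illustrative only and does not come from the cited studies; the two sets of genus-level counts are hypothetical and are included just to show that a more even community gives a higher index value.

    import math

    def shannon_index(counts):
        """Shannon diversity index H' = -sum(p_i * ln(p_i)) over taxa with non-zero counts."""
        total = sum(counts)
        return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

    # Hypothetical genus-level read counts for two infant stool samples
    even_community = [420, 310, 150, 80, 40]    # no single genus dominates
    skewed_community = [900, 60, 25, 10, 5]     # one genus dominates

    print(round(shannon_index(even_community), 2))    # ~1.34
    print(round(shannon_index(skewed_community), 2))  # ~0.43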
When breastfed infants are one month of age, there is a direct association between the levels of secretory immunoglobulin A in intestinal secretions and the number of Bifidobacteria in the gut. Furthermore, the level of the proinflammatory cytokine interleukin-6 in intestinal secretions is inversely related to the number of Bifidobacterium fragilis organisms in the gut at one month of age. It has been suggested that human milk oligosaccharides do not just stimulate Bifidobacterium longum subsp. infantis proliferation, they also activate important genes involved in the proinflammatory and anti-inflammatory balance in the intestinal mucosa . These observations provide additional evidence of the beneficial effects of breastfeeding for the newborn infant (Fig. ). In addition to human milk oligosaccharides, human milk contains other glycans with antimicrobial and prebiotic activity that are thought to have beneficial effects on the infant . On the other hand, there is accumulating evidence that human milk is not sterile, but contains maternally derived bacterial molecular motifs that are thought to influence the development of the newborn infant's immune system . This mechanism, which has been called bacterial imprinting, requires further research . However, comparative studies with formula-fed infants have not carefully documented the effects of formula feeding on the gut microbiota or health-promoting bacteria. There is growing evidence that the microbiota does not reach its adult composition until two to three years of age . Finally, host defences can be improved by breastfeeding, which helps the immature intestinal mucosal immune system to develop and respond appropriately to highly variable bacterial colonisation and food antigen loads. Later in life, the type of food consumed influences the profile of the gut microbiota, and short-chain fatty acids play a central role . Short-chain fatty acids are organic fatty acids that are produced in the distal gut by the bacterial fermentation of macro-fibrous material that escapes digestion in the upper gastrointestinal tract and enters the colon. They are central to the physiology and metabolism of the colon. Resident bacteria can also metabolise dietary carcinogens, synthesise vitamins and assist in the absorption of various molecules. Research has shown that 90–95% of the short-chain fatty acids present in the colon are made up of acetate (60%), propionate (25%) and butyrate (15%). Butyrate is a major energy source for the colonic epithelium. Short-chain fatty acids have also been associated with improved metabolic functions in individuals with type 2 diabetes mellitus, as they help to control blood glucose levels, insulin resistance and glucagon-like peptide-1 secretion .

Gut microbiota predators

The use of broad-spectrum antibiotics significantly reduces the relative abundance of Bacteroidetes and, at the same time, increases the abundance of Firmicutes . A reduction in microbial diversity is often observed in infants under one year of age who have received oral antibiotics. Complete recovery of the initial bacterial composition is not always achieved; the response depends on the type of antibiotics, the duration of administration and the baseline microbiome. Studies have reported that antibiotics that target specific pathogenic infections and diseases may alter the gut microbiota ecology, and its interactions with the host metabolism, to a much greater degree than previously assumed .
The prolonged use of antibiotics, which is common in preterm infants, profoundly decreases microbial diversity and promotes the growth of predominant pathogens, such as Clostridium , Klebsiella and Veillonella , which have been associated with neonatal sepsis. It has been suggested that a healthy microbiota may be present in extremely premature neonates and may reduce the risk of sepsis . More research is needed to determine whether different antibiotics, probiotics or other novel therapies could re-establish a healthy microbiome in neonates. It has also been reported that low-dose antibiotic exposure that disrupted the microbiota during maturation altered the host metabolism and adiposity in mice . A study that gave mice low-dose penicillin immediately after birth demonstrated metabolic alterations and changes in the ileal expression of genes involved in immunity . Administering low-dose penicillin was sufficient to perturb the microbiota and modify body composition, even when the drug was limited to early life. This indicates that microbiota interactions in infancy may be critical determinants of long-term host metabolic effects. Other xenobiotics, such as proton pump inhibitors, may alter the gut microbiota. Meta-analyses have shown that the use of proton pump inhibitors potentially increased the risk of enteric infections caused by Clostridium difficile. Proton pump inhibitors have also been linked to small intestinal bacterial overgrowth, spontaneous bacterial peritonitis, community-acquired pneumonia, hepatic encephalopathy and adverse outcomes of inflammatory bowel disease .

The role the gut microbiota plays in gut maturation

A study by Hooper et al., published in 2001, reported that a single bacterial species, Bacteroides thetaiotaomicron , which is a prominent component of normal mouse and human intestinal microbiomes, modulated the expression of genes involved in several important intestinal functions. These included nutrient absorption, mucosal barrier fortification, xenobiotic metabolism, angiogenesis and postnatal intestinal maturation . Another study, which covered gastrointestinal motility, found that bacterial metabolites, such as short-chain fatty acids and deconjugated bile salts, generated potent motor responses . Colonised mice have been shown to have a faster intestinal transit time than germ-free mice . Collectively, the gut microbiota influences tissue regeneration, the permeability of the epithelium, the vascularisation of the gut and tissue homoeostasis.

Role of the gut microbiota in the development of the gut immune system

The intestine is an important immune organ that harbours approximately 60% of the total immunoglobulins and more than 10^6 lymphocytes per gram of tissue. The largest pool of immune-competent cells in the body is housed in the intestinal mucosa. The number of T lymphocytes and plasmocytes within the intestinal lamina propria increases markedly in response to intestinal colonisation. Although immunoglobulin A producing cells are virtually absent in germ-free mice, high levels are detectable in the mucosa when bacterial colonisation occurs . The gut microbiota exerts positive stimulatory effects on the intestinal innate and adaptive immune systems, by modulating the development of the intestinal mucous layer and lymphoid structures, immune-cell differentiation and the production of immune mediators , . The innate immune system must discriminate between pathogens and the harmless commensal bacteria of the gut microbiota.
Pathogen recognition receptors, such as Toll‐like receptors and nucleotide‐binding oligomerisation domain receptors, enable us to recognise a restricted number of bacterial motifs. These can be either microbe‐associated molecular patterns or, in the case of pathogens, pathogen‐associated molecular patterns . Both types of pathogen recognition receptors are naturally expressed by the intestinal epithelial and antigen‐presenting cells, such as dendritic cells or macrophages, and this enables them to sense any bacterial motifs easily. The intestinal epithelial barrier is protected by a highly viscous microfilm to avoid permanent and unwanted stimulation of the innate immune system. This prevents close contact between the commensal bacteria and intestinal epithelial cells. The intestinal mucosal barrier function can be defined as the capacity of the intestine to host commensal bacteria and molecules, while preserving the ability to absorb nutrients and prevent the invasion of host tissues by resident bacteria. The dense communities of bacteria in the intestine are separated from body tissues by a monolayer of intestinal epithelial cells. The assembly of the multiple components of the intestinal barrier is initiated during foetal development and continues during early postnatal life. This means that the intestinal barrier is not completely developed soon after birth, particularly in preterm infants. The secretion of mucus‐forming mucins, secretory immunoglobulin A and antimicrobial peptides reinforces the mucosal barrier on the extra‐epithelial side, while a variety of immune cells contribute to mucosal defence on the inner side. Thus, the mucosal barrier is physical, biochemical and immune in nature. In addition, the microbiota may be viewed as part of this system because of the mutual influence of the host and the luminal microorganisms. Altered mucosal barrier function, accompanied by increased permeability and/or bacterial translocation, has been linked to a variety of conditions. These have included metabolic disorders, such as type 2 diabetes mellitus, insulin resistance, obesity and inflammatory bowel diseases . Genetic and environmental factors may converge to evoke defective functioning of the barrier, which, in turn, may lead to overt inflammation of the intestine as a result of an exacerbated immune reaction towards the microbiota. Inflammatory bowel diseases may be both precipitated and treated by either stimulation or downregulation of the different elements of the mucosal barrier, and the outcome depends on the timing, the types of cells affected and other factors. Fermentation products of commensal bacteria have been shown to enhance the intestinal barrier’s function, by facilitating the assembly of tight junctions through the activation of adenosine monophosphate‐activated protein kinases . On the other hand, removing the entire detectable commensal gut microbiota by using a four‐week course of four orally administered antibiotics – vancomycin, neomycin, metronidazole and ampicillin – led to more severe intestinal mucosal injury in a mouse colitis model induced by dextran sulphate sodium . Early treatments with broad‐spectrum antibiotics have been shown to alter the gastrointestinal tract’s gene expression profile and intestinal barrier development . This finding underlines the importance of normal bacterial colonisation in the development and maintenance of the intestinal barrier. 
Antibiotic therapy between birth and five years of age might increase the risk of Crohn disease by disrupting the pattern of gut colonisation . A meta-analysis confirmed that antibiotic use was associated with an increased risk of new-onset Crohn disease, but not of ulcerative colitis . In summary, the gut microbiota protects against pathogens, influences the development of the intestinal barrier and its functions and plays many roles in the development of the gut immune system. It acts by competing for nutrients and receptors, by producing antimicrobial compounds and by stimulating a multiple-cell signalling process that can limit the release of virulence factors.

Role of the gut microbiota in health and disease

As emphasised above, microorganisms colonise the human gut from birth, and even before that, and stimulate the development of the local and systemic immune systems. In addition, the newly developed immune system shapes the gut flora, which means that it is unique for every individual. An imbalance or alteration in the composition and/or function of the microbiota, which is usually called dysbiosis, has been found to be associated with many chronic diseases . However, in this relationship, it is almost impossible to delineate the causes from the consequences, as few studies have shown that changes in the microbiota precede inflammation .

Inflammatory bowel disease

The current hypothesis of the aetiology of inflammatory bowel disease suggests that the inflammation is a consequence of an unrestrained or aberrant immune response to the gut flora, which is shaped by different environmental factors in a genetically predisposed individual . The most consistent changes that have been described have been a reduction in the diversity of the gut microbiota, increased abundance of Bacteroidetes and Proteobacteria and the loss of Firmicutes . Furthermore, the loss of certain specific beneficial microbes, such as Faecalibacterium prausnitzii and members of Clostridium clusters XIVa and IV, has previously been described . The importance of these specific microorganisms has been further demonstrated by their ability to inhibit inflammation and affect the differentiation of regulatory T cells. More precisely, Faecalibacterium prausnitzii has the ability to stimulate the production of interleukin-10 and inhibit proinflammatory cytokines such as interleukin-12 and interferon-gamma. Other mechanisms could also be involved, such as decreased production of short-chain fatty acids, which then affects the differentiation and expansion of regulatory T cells and the growth of epithelial cells. Another well-described feature of patients with inflammatory bowel diseases is altered intestinal barrier function, mainly in the form of increased permeability and decreased mucus production. Both of these factors can be influenced by the microbiota, but they can also give bacteria easier access to the mucosa, allowing them to get closer to immunocompetent cells. Patients with inflammatory bowel diseases exhibit increased colonisation by bacteria that are able to adhere to the intestinal epithelium, causing altered permeability of the intestine . This adherence can be further promoted by the increased number of mucolytic bacteria, such as Ruminococcus gnavus and Ruminococcus torques . In addition, the number of sulphate-reducing bacteria, such as Desulfovibrio , is increased in patients with inflammatory bowel disease.
This has been shown to result in the production of hydrogen sulphide, which damages intestinal epithelial cells and induces mucosal inflammation .

Functional gastrointestinal disorders

The pathogenesis of functional gastrointestinal disorders has not yet been fully explained, but the proposed mechanisms include mild gastrointestinal inflammation, visceral hypersensitivity, an altered brain–gut axis and altered gut microflora . The most notable changes in microbial intestinal colonisation during the first weeks and months of life have been described in infants with infantile colic . These infants were reported to have decreased faecal-bacterial diversity, increased gram-negative bacterial colonisation and a lack of Actinobacteria and Firmicutes , phyla which appear to have a protective effect. More specifically, infants with colic have been shown to have more Proteobacteria and less Bifidobacteria and Lactobacillus . Although a cause versus effect relationship has not been fully described, there is evidence that changes in the gut microbiota precede the development of infantile colic . Similar changes in the microbiome have been reported in older children with functional gastrointestinal disorders. One meta-analysis, published in 2017, identified reduced colonisation by Lactobacillus , Bifidobacterium and Faecalibacterium prausnitzii in patients with irritable bowel syndrome, particularly in diarrhoea-predominant irritable bowel syndrome . Furthermore, a greater proportion of the Proteobacteria phylum and of genera such as Dorea, Haemophilus , Ruminococcus and Clostridium were found in the same group of patients , . These changes might have altered or influenced visceral perception, gut motility, gut permeability and intestinal gas production, which can lead to functional gastrointestinal disorders where pain is the predominant complaint.

Allergies

The immune system of the gastrointestinal tract is in close proximity to many antigens that originate mainly from food and the gut microbiota, both of which can affect immune tolerance. The normal commensal microflora play an essential role in inflammatory homoeostasis and appropriate immune regulation and may therefore influence the development of allergic diseases. It has been suggested that alterations in the microbiota can disrupt mucosal immune tolerance, leading to allergic diseases, such as food allergies, atopic dermatitis and even asthma . The early microbiota of children who later developed allergies has been characterised by lower bacterial diversity, with predominant Firmicutes , higher counts of the Bacteroidaceae and increased numbers of the anaerobic Bacteroides fragilis , Escherichia coli , Clostridium difficile , Bifidobacterium catenulatum , Bifidobacterium bifidum and Bifidobacterium longum. In contrast, decreased numbers of Bifidobacterium adolescentis , Bifidobacterium bifidum and Lactobacillus have been reported . When the microbiota of children with allergies was assessed at the onset of allergic symptoms in one study, it showed a different pattern, with higher counts of Bacteroides , lower counts of Akkermansia muciniphila, Faecalibacterium prausnitzii and Clostridium and overall lower bacterial diversity . The potential mechanisms underlying an increased risk of sensitisation and allergy development, detected as a consequence of dysbiosis in animal models, have been related to various alterations in mucosal regulatory T cells.
Other reported effects were defects in the epithelial barrier function, as evidenced by increased mucosal permeability, diminished secretory immunoglobulin A production and excretion and altered dendritic and B-cell function .

Obesity and liver disease

Studies have shown that the gut microbiota could also play an important role in the aetiopathogenesis of obesity and of prevalent chronic liver diseases, such as nonalcoholic fatty liver disease and nonalcoholic steatohepatitis. Nonalcoholic fatty liver disease has become one of the most frequent causes of liver disease and represents a spectrum of pathologies, varying from steatosis to nonalcoholic steatohepatitis, with or without cirrhosis, and possible evolution to hepatocellular carcinoma. Nonalcoholic fatty liver disease is a multifactorial disease that is affected by genetic, metabolic, dietary and environmental factors. The most commonly proposed theory is the multiple hit hypothesis, which also involves changes to the gut microbiota . The gut microbiota plays an important role in obesity, and this is primarily based on its influence on energy balance. Dysbiosis affects short-chain fatty acid production and metabolism and adipocyte lipid deposition, with a decrease in mitochondrial fatty acid oxidation. Human studies have reported that the balance between Bacteroidetes and Firmicutes has been related to obesity. Lean subjects have more Bacteroidetes in their gut microbiota, and a diet that restricted fats and carbohydrates was shown to shift the ratio in favour of Bacteroidetes . With regard to chronic liver disease, the proposed mechanisms for the negative effects of dysbiosis include small intestine bacterial overgrowth, altered release of inflammatory cytokines, alteration of the intestinal barrier, choline metabolism, endogenous ethanol production, regulation of hepatic toll-like receptor expression in patients with nonalcoholic fatty liver disease or nonalcoholic steatohepatitis and an alteration in bile acid metabolism . Furthermore, there is evidence that gut dysbiosis promotes the progression of nonalcoholic steatohepatitis to cirrhosis and hepatocellular carcinoma via an increase in tumour necrosis factor alpha and interleukin-8, the activation of toll-like receptor-4 and toll-like receptor-9 and the production of interleukin-1beta in Kupffer cells, favouring lipid accumulation, hepatocyte death, steatosis, inflammation and fibrosis . Many animal studies have evaluated the gut microbiota differences associated with nonalcoholic fatty liver disease or nonalcoholic steatohepatitis, but few studies have been performed in humans and they have produced inconsistent results. Patients with nonalcoholic steatohepatitis, including children, have been reported to have lower levels of Bacteroidetes than patients with liver steatosis or healthy individuals . Firmicutes have been found at higher levels in individuals with nonalcoholic fatty liver disease than in healthy subjects , but the results have not been consistent .

Modulation of the gut microbiota

The gut microbiota can be modulated to achieve health-promoting effects . The beneficial manipulation of the composition and metabolic footprint of the gut microbiota can be achieved by using probiotics. These can be defined as a preparation of, or a product containing, viable microorganisms in an adequate number to enable such dietary preparations to favourably modulate the gut microbiota , .
The ability to exert a beneficial modulation on the gut microbiota may be enhanced by combining probiotics with other ingredients (64), namely prebiotics, which are capable of favouring the growth and/or activity of microorganisms. Prebiotics appear to be poorly understood by the general public in this regard . It is important to correctly define, and understand, prebiotics and their potential when they are combined with probiotics. This information needs to be disseminated beyond the scientific community, so that regulatory agencies, the food industry and healthcare professionals can correctly describe them and suggest how they should be used. The combined use of prebiotics and probiotics may be described as synbiotic if the net health benefit is synergistic and scientifically validated . Finally, the terms paraprobiotic and postbiotic describe nonviable bacterial cells and soluble factors that are secreted as metabolic by-products by live bacteria. Such products, which could also be released after bacterial lysis, can provide additional physiological benefits to the host organism. That is why they have received increasing attention from scientific researchers and industry, due to their potential food and pharmaceutical applications (Table ).

Probiotics

Probiotics have been defined as live microorganisms that, when administered in adequate amounts, confer a health benefit on the host , . The term probiotics is used widely, but not always properly, in the scientific literature and by the industry. Their fundamental characteristics have been described extensively in the literature , including their microbial origin, their viability and their benefit to the health of the host (Table ). The microbial origin of a probiotic product must be guaranteed by identifying a taxonomically defined microbe or combination of microbes. A probiotic must therefore be properly identified at the strain level and characterised both genotypically and phenotypically. An essential characteristic of a probiotic is its viability , as it must be a live microorganism that is able to survive the acidity of the stomach in order to reach and colonise the intestinal tract. Moreover, a probiotic must be guaranteed to remain viable and stable throughout the technical procedures involved in its production, use and storage. A consensus statement was issued by the International Scientific Association for Probiotics and Prebiotics in 2013 with regard to the possible benefits of probiotics to human health. The statement sought to further clarify the appropriate use and scope of the term probiotic and stated that probiotics should exert specific general benefits, which it defined as core benefits . These benefits include contributing to establishing and sustaining a healthy gut microbiota. They are expected to be obtained by creating a favourable intestinal environment through non-strain-specific beneficial actions that are shared by most probiotics, which sustain a healthy digestive tract and immune system. In fact, some effects of probiotics can be observed across taxonomic groups and are achieved through general mechanisms, such as the inhibition of pathogens and the production of beneficial metabolites. These effects should be distinguished from other benefits, such as neurological or endocrinological effects, which are strain specific. An important aspect of probiotic activity is identifying the adequate amount that is able to confer health benefits on the host, but a specific accepted definition of this is not currently available.
Nevertheless, some regulatory approaches in Canada and Italy , have suggested that a probiotic product should contain at least 1 × 10^9 colony-forming units per serving to be able to exert the claimed beneficial effects. The 2013 Statement also describes the different categories of live microorganisms for human use, in order to distinguish what can and cannot be considered a probiotic, according to health claims . Products claiming to contain live and active cultures should not be considered probiotics, because the simple use of the terms live and active does not imply any probiotic activity. Foods or supplements that state they contain probiotics have no specific health claims, and their expected effects are those related to the core benefits, as demonstrated by well-conducted human studies. Products containing probiotics that make specific health claims are those that claim to have particular beneficial health effects, according to documented evidence from well-designed observational studies. Products containing probiotics that claim they can prevent or treat a specific disease need to be backed up by appropriate trials to meet the regulatory standards for drugs. Probiotics are commonly used in paediatric practice, and a summary of the indications and limitations is reported in Table . Their use includes preventing common and nosocomial infections, allergies and antibiotic-associated diarrhoea, treating acute gastroenteritis and functional abdominal pain disorders and preventing and treating infantile colic. Guidelines by Hojsak et al. on using probiotics in clinical practice for children were published in 2018 , and the authors reported that probiotics seemed to be safe in general, even when provided in high doses. The authors provided a detailed description of the correct conditions for their use, together with specific positive instructions for the use of strictly defined strains for various clinical conditions. These conditions include preventing upper respiratory tract infections in children attending day care centres, nosocomial diarrhoea and antibiotic-associated diarrhoea and treating acute gastroenteritis and infantile colic in breastfed infants.

Prebiotics

The definition of prebiotics has undergone an important evolution over time. They were initially referred to as nondigestible food ingredients that beneficially affect the host by selectively stimulating the growth and/or activity of one or a limited number of bacteria already residing in the colon (S61). Several studies have focused on the nondigestible oligosaccharides fructans, namely fructooligosaccharides and inulin, and galactans, namely galactooligosaccharides, and how they exert their effects through the enrichment of Lactobacillus and/or Bifidobacterium spp. (S62). Prebiotics have been described as nondigestible compounds that confer a beneficial physiological effect on the host (S63). They do this through their metabolism by microorganisms in the gut, which then modulates the composition and/or activity of the gut microbiota. In 2017, the International Scientific Association for Probiotics and Prebiotics Consensus Statement proposed a new definition for prebiotics (S64). The document discussed the concept of selectivity with respect to fermentation by bacteria and suggested that prebiotics be defined as substrates that are selectively utilised by host microorganisms and confer a health benefit on the host (Table ).
Incorporating the concept of selectivity in the definition is important, as it distinguishes between prebiotics and other substances. The term selective does not mean that only lactobacilli and bifidobacteria are affected by prebiotics. It means that a broader range of microorganisms, but not all, can be affected. Substances that can affect the composition of the microbiota, but are not selectively used by microorganisms, are not prebiotics. The use of prebiotics in paediatric clinical practice is currently limited. Human milk oligosaccharides are a group of prebiotics that can influence a newborn infant's gastrointestinal health by favouring the development of a healthy gut microbiota through some metabolic and immunological activities. It has been demonstrated that an infant's consumption of human milk oligosaccharides increases the proportion of human milk oligosaccharide-consuming Bifidobacteriaceae , particularly Bifidobacterium longum subsp. infantis and Bacteroidaceae (S65). The mechanisms of action in the newborn infant's intestine include immune regulation and preventing the adhesion of pathogens to the intestinal epithelium, which protects the infant from infections (S66). Some compounds that are equivalent to human milk oligosaccharides or bovine milk oligosaccharides are obtained by enzymatic synthesis. It is still a matter of debate whether these are able to exert beneficial effects on human health by selectively stimulating the microbiota and thus acting as prebiotics. The existing literature does not provide definitive conclusions, but some human milk oligosaccharides may be considered candidate prebiotics. Studies have reported that prebiotics containing immunoactive oligosaccharides could effectively prevent atopic dermatitis in low-atopy risk infants and that they could potentially be used to prevent adolescents becoming overweight. However, the clinical significance and efficacy of prebiotics and their possible widespread use in paediatric practice still needs to be clarified (S67-69).

Synbiotics

Synbiotics are commonly described as a combination of probiotics and prebiotics in functional food compounds. Functional food is a food that has been modified and claims to improve a person's health or well-being by providing benefits that extend beyond the traditional nutrients it contains. Examples of functional foods include bread, cereals and drinks that are fortified with vitamins or selected herbs. They can also contain nutraceuticals, which have physiological benefits or provide protection against chronic disease. Studies have reported that the combined use of probiotics and prebiotics has facilitated the survival of live microbial dietary supplements and their implantation in the gastrointestinal tract (S66–68). This mechanism has been reported to generate a beneficial effect in the host organism, by the metabolic activation of a restricted type of bacteria, which is considered to be health promoting, and the selective stimulation of its growth (S69). It has been suggested that these combined conditions have improved the host's welfare (S70). Single products containing an appropriate combination of probiotics and prebiotics have been reported to guarantee a greater effect than when they have been used separately. In fact, the synbiotic activity of foods containing a combination of prebiotics and probiotics is based on their elective action in two different areas of the gut.
Probiotics are mainly active in the small and large intestine, while prebiotics are mainly active in the large intestine (S69). Synbiotics act in combination in two main ways: by improving the viability of probiotic microorganisms and by providing specific benefits for the host's health. The rationale for using a synbiotic formulation is that the prebiotics function as a selective medium, favouring the growth of certain probiotic strains, their fermentation and their intestinal passage. Furthermore, several studies have reported that prebiotics have positively influenced the ability of probiotic microorganisms to develop higher tolerance to potentially challenging conditions, including oxygenation and the pH and temperature of the intestines (S71). In brief, the main reason for using synbiotics is that the survival of probiotics in the digestive system is challenging under normal conditions and when an appropriate prebiotic is not present. Therefore, using prebiotics to stimulate the effectiveness of probiotics appears to be a good way of inducing the beneficial modulation of the metabolic activity of probiotics in the intestine. At the same time, this preserves the intestinal biostructure, favours the development and maintenance of a beneficial microbiota and inhibits the growth of potential pathogens in the gut. In general, the beneficial outcomes of synbiotics for the host's health have been related to significant increases in short-chain fatty acid levels, ketones, carbon disulphides and methyl acetates. In particular, the potential beneficial activity of synbiotics in clinical practice has been described in different clinical conditions (Table ). The reported potential therapeutic properties of synbiotics include anticarcinogenic, anti-allergic and antibacterial effects (S67). A few studies, which need to be confirmed or validated, have also suggested that synbiotics could be used to prevent constipation, diarrhoea and osteoporosis and to treat brain diseases associated with altered hepatic function (S72). Studies suggest that the synbiotic activity exerted by a combination of prebiotics and probiotics in functional food products is mainly due to their ability to modulate the host's immune system. This means that they can be used in clinical practice for selected conditions. It has been reported that healthcare professionals have used synbiotics in clinical practice before using antibiotics and surgical interventions and that their use may be related to cost-effectiveness and safety considerations. Finally, the availability of synbiotic-based commercial products is rapidly increasing, due to the large number of possible combinations of prebiotics and probiotics. This may offer increased therapeutic options in the near future (S72).

Paraprobiotics and postbiotics

In addition to the factors provided by the host organism, further regulatory elements are able to support the maintenance and growth of the gut microbiota by favouring bacterial development, reproduction, protection from external insults and intercellular communication (S73). Data from the literature have emphasised that bacterial viability, which characterises probiotic activity, is not the exclusive factor involved in exerting health-promoting effects (S74). In this regard, the term paraprobiotics is used to identify nonviable, inactivated microbial cells that have shown dose-related beneficial effects for consumers.
It has been suggested that paraprobiotics are safer than viable bacterial products for selected groups of patients, such as individuals with impaired immune systems, because they pose a reduced risk of infection, microbial translocation or potential inflammatory responses. Inactivated bacterial cells are typically obtained artificially, through chemical or physical methods such as heating, acid deactivation, freeze-drying, sonication and ultraviolet treatment. These methods modify the cell structure and/or the physiological functions of the bacteria while preserving the beneficial properties of their viable forms (S74). The term postbiotics describes soluble factors that may be secreted by viable bacteria or by-products resulting from bacterial lysis (S73, S74) (Table ). Several bacterial strains have shown the ability to express a wide range of soluble factors of different natures, including cell surface proteins, vitamins, enzymes, peptides, teichoic acids, plasmalogens and organic and short-chain fatty acids. These cell-free supernatant metabolites have been reported to possess antimicrobial, antioxidant and immunomodulatory properties. These can positively influence microbiotal homoeostasis as well as the metabolic and/or signalling pathways of the host organism. The active structure and mechanism of action that enable postbiotics to produce a beneficial effect in the context of physiological, immunological, neuro-hormonal, biological and metabolic reactions in the host have not yet been clarified. Investigations are currently in progress to explain the beneficial health effects of postbiotic products reported in the literature. These health effects have previously been related to their possible anti-inflammatory, antiproliferative, antioxidant, hypocholesterolaemic, antihypertensive, anti-obesogenic, hepatoprotective and antimicrobial activities (S74). The definitions of probiotics and prebiotics have a long history, particularly with regard to the standing of probiotics in international regulations, but there is still no consensus on them. It is unclear whether these terms will be maintained or changed in future. Specific commercial products use fermentation technology, such as fermented milk-based infant formulas, and these can be used in clinical practice to beneficially modulate the gut microbiota and gut immunity. Selected lactic acid bacterial strains are used in industrial processes to ferment cows' milk, and these are combined with heat treatment. The end products are formulas that contain no viable bacteria or prebiotic components, but do contain specific active factors resulting from the fermentation process (S75). Metabolites produced through fermentation processes are used as raw materials for pharmaceutical products, healthcare supplements and functional foods. Experimental in vitro and in vivo studies have indicated that specific fermentation products are involved in establishing immune balance and oral tolerance, although the mechanism of action underlying these functions has not yet been fully explained (S75). Changes in the microbiome have been documented in many chronic, mainly immune-mediated, gastrointestinal and liver diseases, and distinct patterns have been associated with each specific disease. However, causality and the mechanisms by which the gut microflora influences the aetiopathogenesis of a disease have not been fully explained, so these may be considered limiting factors of this review.
Paediatricians are on the frontline when it comes to caring for children's health and well-being. The strength of this review is that we have emphasised the key roles that paediatricians play in minimising preventable early harmful events that could permanently influence the composition and/or function of the gut microbiota.
Previously called the gut microflora, the microbial communities are composed of approximately 10 14 bacteria, which is approximately 10 times the number of cells in the human body . The term gut microbiota refers to the organisms that comprise the microbial community, while the term microbiome refers to the collective genomes of the microbes, including bacteria, bacteriophages, fungi, protozoa and viruses that live inside and on the human body. The gut microbiota may be considered a human organ that can be transplanted, and it has its own functions, such as modulating the expression of genes involved in mucosal barrier fortification, angiogenesis and postnatal intestinal maturation of several gut‐associated systems . The gut microbiota comprises more than 2000 microbial species. Its diversity has been revealed by the application of metagenomics: 16S ribosomal ribonucleic acid gene or deoxyribonucleic acid . Firmicutes and Bacteroidetes are the two dominant bacterial phyla in most individuals. Other phyla include Proteobacteria, Actinobacteria , Fusobacteria and Verrucomicrobia . Groups of bacterial families have been classified into enterotypes on the basis of their functions. The term enterotype and its definition remain debated. For example, the classification may be based on the metabolism of dietary components and the ability to metabolise drugs. The aim of this classification is to help us to understand the role of the gut microbiota in health and disease. Ageing is associated with changes in the diversity of noncultured species that current laboratory culturing techniques are unable to grow in the laboratory. These are a greater proportion of Bacteroides , a distinct abundance of Clostridium clusters, an increased enterobacteria population and a lower number of bifidobacteria. The taxonomic alterations may be due to changes in diets, such as less fibre, and/or, the increased use of antibiotics with advancing age . There is no definition of a normal microbiota, since the bacterial species vary in different groups of individuals. The vast majority of microbial species give rise to symbiotic host–bacterial interactions that are fundamental for human health. Disrupting the development of a stable gut microbiota, which is known as dysbiosis, may be associated with several clinical conditions. These include nosocomial infections, necrotising enterocolitis in premature infants, inflammatory bowel disease, obesity, autoimmune diseases, allergies or even functional bowel disorders or behavioural problems.
Foetal colonisation and prematurity The sterility of the gut of the foetus in utero has been challenged by studies that have identified bacteria, bacterial deoxyribonucleic acid or bacterial products in the meconium, amniotic fluid and placenta. These indicate the initiation of microbial colonisation from the mother to offspring , . Therefore, during developmental phases, the foetus could encounter bacteria in utero that might contribute to establishing the microbiota before delivery. This prenatal bacterial colonisation of the foetal gut might be a source of microbial stimulation, providing a primary signal for the maturation of a balanced postnatal innate and adaptive immune system. However, studies stating the existence of this in utero microbiota remain controversial , . Importantly it has been shown that meconium with low bacterial diversity has been associated with a more frequent onset of sepsis in very low birth weight babies . The first and most important phase of normal colonisation occurs when the newborn foetus passes through the birth canal and ingests maternal vaginal and faecal microorganisms. These bacteria proliferate further when oral feeding is initiated. After 48 hours, the number of bacteria is already as high as around10 4 –10 6 colony‐forming units per millilitre of intestinal content. However, many factors can influence this process and they may potentially impair the establishment of what is known as symbiosis (Fig. ) . The pattern of bacterial colonisation in preterm infants differs from the pattern observed in the healthy gut of full‐term infants during the neonatal period . This abnormal colonisation, which is mostly due to the routine use of sterile formula and antibiotics in neonatal intensive care units, could play a central role in feeding intolerance. It could also be indicated in the development of necrotising enterocolitis, which is a severe disease primarily that affects premature infants and often leads to death or short bowel syndrome, which requires an extensive bowel resection . Mode of delivery The microbiota of vaginally delivered infants mirrors the vaginal and gut microbiota of the mother. Infants delivered by Caesarean section have reduced bacterial biodiversity, and colonisation by Bifidobacteria can be delayed by up to six months, in contrast to vaginally delivered infants , . Infants delivered by Caesarean section exhibit bacterial communities composed of prominent genera, such as Lactobacillus , Prevotella , Escherichia , Bacteroides and Bifidobacterium . After a Caesarean section, the gut microbiota is characterised by a reduced number of Bifidobacteria species. Although vaginally delivered neonates exhibit individual microbial profiles, these are characterised by predominant groups, such as Bifidobacterium longum and Bifidobacterium catenulatum . Dominguez‐Bello et al. used multiplex 16S ribosomal ribonucleic acid gene pyrosequencing to characterise the bacterial communities of mothers and their neonates. Interestingly, they reported that vaginally delivered infants acquired bacterial communities that resembled their own mothers’ vaginal microbiota and that these were dominated by Lactobacillus , Prevotella or Sneathia spp. In contrast, infants delivered by Caesarean section harboured bacterial communities similar to those found on the skin surface and these were dominated by Staphylococcus , Corynebacterium and Propionibacterium spp. . 
Influence of feeding
The mode of oral feeding may influence the composition of the gut microbiota in infants. Breastfeeding has been associated with higher diversity, as assessed using the Shannon index. Human milk contains beneficial factors for the gut microbiota, such as oligosaccharides. Oligosaccharides function as prebiotics, by stimulating the growth of Bifidobacterium and Lactobacillus species, thereby selectively altering the microbial composition of the intestine. It is likely that evolutionary selective pressure has equipped Bifidobacterium longum subsp. infantis with multiple enzymes to deconstruct human milk glycans. As a result, this subspecies is able to outcompete other Bifidobacteria as well as other commensals and pathogens in the gut lumen of healthy breastfed infants. In formula-fed infants, Enterococci, Bacteroides and Clostridia predominate. When breastfed infants are one month of age, there is a direct association between the levels of secretory immunoglobulin A in intestinal secretions and the number of Bifidobacteria in the gut. Furthermore, the level of the proinflammatory cytokine interleukin-6 in intestinal secretions is inversely related to the number of Bacteroides fragilis organisms in the gut at one month of age. It has been suggested that human milk oligosaccharides do not just stimulate Bifidobacterium longum subsp. infantis proliferation, they also activate important genes involved in the proinflammatory and anti-inflammatory balance in the intestinal mucosa. These observations provide additional evidence of the beneficial effects of breastfeeding for the newborn infant (Fig. ). In addition to human milk oligosaccharides, human milk contains other glycans with antimicrobial and prebiotic activity that are thought to have beneficial effects on the infant. On the other hand, there is accumulating evidence that human milk is not sterile, but contains maternally derived bacterial molecular motifs that are thought to influence the development of the newborn infant's immune system. This mechanism, which has been called bacterial imprinting, requires further research. However, comparative studies with formula-fed infants have not carefully documented the effects of formula feeding on the gut microbiota or health-promoting bacteria. There is growing evidence that the microbiota does not reach its adult composition until two to three years of age. Finally, host defences can be improved by breastfeeding, which helps the immature intestinal mucosal immune system to develop and respond appropriately to highly variable bacterial colonisation and food antigen loads. Later in life, the type of food consumed influences the profile of the gut microbiota and short-chain fatty acids play a central role. Short-chain fatty acids are organic fatty acids that are produced in the distal gut by the bacterial fermentation of macro-fibrous material that escapes digestion in the upper gastrointestinal tract and enters the colon. They are central to the physiology and metabolism of the colon. Resident bacteria can also metabolise dietary carcinogens, synthesise vitamins and assist in the absorption of various molecules. Research has shown that 90-95% of the short-chain fatty acids present in the colon are made up of acetate (60%), propionate (25%) and butyrate (15%). Butyrate is a major energy source for the colonic epithelium.
Short-chain fatty acids have been associated with improved metabolic functions in individuals with type 2 diabetes mellitus, as they help to control blood glucose levels, insulin resistance and glucagon-like peptide-1 secretion.
Gut microbiota predators
The use of broad-spectrum antibiotics significantly reduces the relative abundance of Bacteroidetes and increases the abundance of Firmicutes at the same time. A reduction in microbial diversity is often observed in infants under one year of age who have received oral antibiotics. Complete recovery of the initial bacterial composition is not always achieved. The response depends on the type of antibiotics, the duration of administration and the baseline microbiome. Studies have reported that antibiotics that target specific pathogenic infections and diseases may alter the gut microbiota ecology, and interactions with the host metabolism, to a much greater degree than previously assumed. The prolonged use of antibiotics, which is common in preterm infants, profoundly decreases microbial diversity and promotes the growth of predominant pathogens, such as Clostridium, Klebsiella and Veillonella, which have been associated with neonatal sepsis. It has been suggested that there may be a healthy microbiota present in extremely premature neonates that may ameliorate the risk of sepsis. More research is needed to determine whether different antibiotics, probiotics or other novel therapies could re-establish a healthy microbiome in neonates. It has also been reported that when low-dose antibiotic exposure disrupted the microbiota during maturation, this altered the host metabolism and adiposity in mice. A study that gave mice low-dose penicillin immediately after birth demonstrated metabolic alterations and changes in the ileal expression of genes involved in immunity. Administering low-dose penicillin sufficiently perturbs the microbiota to modify body composition, even when these drugs are limited to early life. This indicates that microbiota interactions in infancy may be critical determinants of long-term host metabolic effects. Other xenobiotics, such as proton pump inhibitors, may alter the gut microbiota. Meta-analyses have shown that the use of proton pump inhibitors potentially increased the risk of enteric infections caused by Clostridium difficile. They have also been associated with small intestinal bacterial overgrowth, spontaneous bacterial peritonitis, community-acquired pneumonia, hepatic encephalopathy and adverse outcomes of inflammatory bowel disease.
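The microbial diversity referred to in the preceding sections, whether the higher diversity reported with breastfeeding or the reduction seen after antibiotics, is typically quantified with the Shannon index, H' = -Σ p_i ln p_i, where p_i is the relative abundance of taxon i in a sample. The following minimal Python sketch uses made-up read counts, purely for illustration, to show how the index is computed and why an even community scores higher than one dominated by a single taxon.

```python
import math

def shannon_index(counts):
    """Shannon diversity index H' = -sum(p_i * ln p_i), using the natural logarithm.

    `counts` are per-taxon read (or colony) counts for a single sample;
    zero counts are skipped because p * ln(p) tends to 0 as p tends to 0.
    """
    total = sum(counts)
    if total <= 0:
        raise ValueError("at least one positive count is required")
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Illustrative (hypothetical) genus-level read counts for two stool samples.
even_sample = [250, 250, 250, 250]    # four equally abundant taxa
skewed_sample = [850, 100, 50]        # dominated by a single taxon

print(round(shannon_index(even_sample), 2))    # 1.39, i.e. ln(4), the maximum for four taxa
print(round(shannon_index(skewed_sample), 2))  # 0.52, reflecting lower diversity
```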
A study by Hooper et al., published in 2001, reported that a single bacterial species, Bacteroides thetaiotaomicron, which is a prominent component of normal mouse and human intestinal microbiomes, modulated the expression of genes involved in several important intestinal functions. These included nutrient absorption, mucosal barrier fortification, xenobiotic metabolism, angiogenesis and postnatal intestinal maturation. Another study, which covered gastrointestinal motility, found that bacterial metabolites, such as short-chain fatty acids and deconjugated bile salts, generated potent motor responses. Colonised mice have been shown to have a faster intestinal transit time than germ-free mice. Collectively, the gut microbiota influences tissue regeneration, the permeability of the epithelium, the vascularisation of the gut and tissue homoeostasis.
Role of the gut microbiota in the development of the gut immune system
The intestine is an important immune organ that harbours approximately 60% of the total immunoglobulins and more than 10^6 lymphocytes per gram of tissue. The largest pool of immune-competent cells in the body is housed in the intestinal mucosa. The number of T lymphocytes and plasmocytes within the intestinal lamina propria increases markedly in response to intestinal colonisation. Although immunoglobulin A-producing cells are virtually absent in germ-free mice, high levels are detectable in the mucosa when bacterial colonisation occurs. The gut microbiota exerts positive stimulatory effects on the intestinal innate and adaptive immune systems, by modulating the development of the intestinal mucous layer and lymphoid structures, immune-cell differentiation and the production of immune mediators. The innate immune system must discriminate between pathogens and the harmless commensal bacteria of the gut microbiota. Pathogen recognition receptors, such as Toll-like receptors and nucleotide-binding oligomerisation domain receptors, enable the host to recognise a restricted number of bacterial motifs. These can be either microbe-associated molecular patterns or, in the case of pathogens, pathogen-associated molecular patterns. Both types of pathogen recognition receptors are naturally expressed by the intestinal epithelial and antigen-presenting cells, such as dendritic cells or macrophages, and this enables them to sense any bacterial motifs easily. The intestinal epithelial barrier is protected by a highly viscous microfilm to avoid permanent and unwanted stimulation of the innate immune system. This prevents close contact between the commensal bacteria and intestinal epithelial cells. The intestinal mucosal barrier function can be defined as the capacity of the intestine to host commensal bacteria and molecules, while preserving the ability to absorb nutrients and prevent the invasion of host tissues by resident bacteria. The dense communities of bacteria in the intestine are separated from body tissues by a monolayer of intestinal epithelial cells. The assembly of the multiple components of the intestinal barrier is initiated during foetal development and continues during early postnatal life. This means that the intestinal barrier is not completely developed soon after birth, particularly in preterm infants. The secretion of mucus-forming mucins, secretory immunoglobulin A and antimicrobial peptides reinforces the mucosal barrier on the extra-epithelial side, while a variety of immune cells contribute to mucosal defence on the inner side.
Thus, the mucosal barrier is physical, biochemical and immune in nature. In addition, the microbiota may be viewed as part of this system because of the mutual influence of the host and the luminal microorganisms. Altered mucosal barrier function, accompanied by increased permeability and/or bacterial translocation, has been linked to a variety of conditions. These have included metabolic disorders, such as type 2 diabetes mellitus, insulin resistance, obesity and inflammatory bowel diseases. Genetic and environmental factors may converge to evoke defective functioning of the barrier, which, in turn, may lead to overt inflammation of the intestine as a result of an exacerbated immune reaction towards the microbiota. Inflammatory bowel diseases may be both precipitated and treated by either stimulation or downregulation of the different elements of the mucosal barrier, and the outcome depends on the timing, the types of cells affected and other factors. Fermentation products of commensal bacteria have been shown to enhance the intestinal barrier's function, by facilitating the assembly of tight junctions through the activation of adenosine monophosphate-activated protein kinases. On the other hand, removing the entire detectable commensal gut microbiota by using a four-week course of four orally administered antibiotics – vancomycin, neomycin, metronidazole and ampicillin – led to more severe intestinal mucosal injury in a mouse colitis model induced by dextran sulphate sodium. Early treatments with broad-spectrum antibiotics have been shown to alter the gastrointestinal tract's gene expression profile and intestinal barrier development. This finding underlines the importance of normal bacterial colonisation in the development and maintenance of the intestinal barrier. Antibiotic therapy between birth and five years of age might increase the risk of Crohn disease by disrupting the pattern of gut colonisation. A meta-analysis confirmed that antibiotic use was associated with an increased risk of new-onset Crohn disease, but not of ulcerative colitis. In summary, the gut microbiota protects against pathogens, influences the development of the intestinal barrier and its functions and plays many roles in the development of the gut immune system. It acts by competing for nutrients and receptors, by producing antimicrobial compounds and by stimulating a multiple-cell signalling process that can limit the release of virulence factors.
As emphasised above, microorganisms colonise the human gut from birth, and even before that, and stimulate the development of the local and systemic immune systems. In addition, the newly developed immune system shapes the gut flora, which means that it is unique for every individual. An imbalance or alteration in the composition and/or function of the microbiota, which is usually called dysbiosis, has been found to be associated with many chronic diseases. However, in this relationship, it is almost impossible to delineate the causes from the consequences, as few studies have shown that changes in the microbiota precede inflammation.
Inflammatory bowel disease
The current hypothesis of the aetiology of inflammatory bowel disease suggests that the inflammation is a consequence of an unrestrained or aberrant immune response to the gut flora, which is shaped by different environmental factors in a genetically predisposed individual. The most consistent changes that have been described have been a reduction in the diversity of the gut microbiota, increased abundance of Bacteroidetes and Proteobacteria and the loss of Firmicutes. Furthermore, the loss of certain specific beneficial microbes, such as Faecalibacterium prausnitzii and members of Clostridium clusters XIVa and IV, has previously been described. The importance of these specific microorganisms has been further demonstrated by their ability to inhibit inflammation and affect the differentiation of regulatory T cells. More precisely, Faecalibacterium prausnitzii has the ability to stimulate the production of interleukin-10 and inhibit proinflammatory cytokines such as interleukin-12 and interferon-gamma. Other mechanisms could also be involved, such as decreased production of short-chain fatty acids, which then affects the differentiation and expansion of regulatory T cells and the growth of epithelial cells. Another well-described feature of patients with inflammatory bowel diseases is altered intestinal barrier function, mainly increased permeability and decreased mucus production. Both of these factors can be influenced by the microbiota, but they can also give bacteria easier access to the mucosa, allowing them to get closer to immunocompetent cells. Patients with inflammatory bowel diseases exhibit increased colonisation by bacteria that are able to adhere to the intestinal epithelium, causing altered permeability of the intestine. This adherence can be further promoted by the increased number of mucolytic bacteria, such as Ruminococcus gnavus and Ruminococcus torques. In addition, the number of sulphate-reducing bacteria, such as Desulfovibrio, is increased in patients with inflammatory bowel disease. This has been shown to result in the production of hydrogen sulphide, which damages intestinal epithelial cells and induces mucosal inflammation.
Functional gastrointestinal disorders
The pathogenesis of functional gastrointestinal disorders has not yet been fully explained, but the proposed mechanisms include mild gastrointestinal inflammation, visceral hypersensitivity, an altered brain–gut axis and altered gut microflora. The most notable changes in microbial intestinal colonisation during the first weeks and months of life have been described in infants with infant colic. These infants were reported to have decreased faecal-bacterial diversity, increased gram-negative bacterial colonisation and a lack of Actinobacteria and Firmicutes, which appear to have a protective effect.
More specifically, infants with colic have been shown to have more Proteobacteria and fewer Bifidobacteria and Lactobacillus. Although a cause versus effect phenomenon has not been fully described, there is evidence that changes in the gut microbiota precede the development of infantile colic. Similar changes in the microbiome have been reported in older children with functional gastrointestinal disorders. One meta-analysis, published in 2017, identified decreased colonisation by Lactobacillus, Bifidobacterium and Faecalibacterium prausnitzii in patients with irritable bowel syndrome, particularly in irritable bowel syndrome where diarrhoea predominated. Furthermore, a greater proportion of the Proteobacteria phylum and of genera, such as Dorea, Haemophilus, Ruminococcus and Clostridium species, were found in the same group of patients. These changes might have altered or influenced visceral perception, gut motility, gut permeability and intestinal gas production, which can lead to functional gastrointestinal disorders where pain is the predominant complaint.
Allergies
The immune system of the gastrointestinal tract is in close proximity to many antigens that originate mainly from food and the gut microbiota, both of which can affect immune tolerance. The normal commensal microflora play an essential role in inflammatory homoeostasis and appropriate immune regulation and may therefore influence the development of allergic diseases. It has been suggested that alterations in the microbiota can disrupt mucosal immune tolerance, leading to allergic diseases, such as food allergies, atopic dermatitis and even asthma. The early microbiota of children who later developed allergies has been characterised by lower bacterial diversity, with predominant Firmicutes, higher counts of the Bacteroidaceae and increased numbers of the anaerobic Bacteroides fragilis, Escherichia coli, Clostridium difficile, Bifidobacterium catenulatum, Bifidobacterium bifidum and Bifidobacterium longum. In contrast, decreased numbers of Bifidobacterium adolescentis, Bifidobacterium bifidum and Lactobacillus have been reported. When the microbiota of children with allergies was assessed at the onset of allergic symptoms in one study, it showed a different pattern, with higher counts of Bacteroides, lower counts of Akkermansia muciniphila, Faecalibacterium prausnitzii and Clostridium and overall lower bacterial diversity. The potential mechanisms underlying an increased risk of sensitisation and allergy development, detected as a consequence of dysbiosis in animal models, have been related to various alterations in mucosal regulatory T cells. Other reported effects were defects in the epithelial barrier function, as evidenced by increased mucosal permeability, diminished secretory immunoglobulin A production and excretion and altered dendritic and B-cell function.
Obesity and liver disease
Studies have shown that the gut microbiota could also play an important role in the aetiopathogenesis of obesity and other prevalent chronic liver diseases, such as nonalcoholic fatty liver disease and nonalcoholic steatohepatitis. Nonalcoholic fatty liver disease has become one of the most frequent causes of liver disease and represents a spectrum of pathologies, varying from steatosis to nonalcoholic steatohepatitis, with or without cirrhosis, and possible evolution to hepatocellular carcinoma.
Nonalcoholic fatty liver disease is a multifactorial disease that is affected by genetic, metabolic, dietary and environmental factors. The most commonly proposed theory is the multiple-hit hypothesis, which also involves changes to the gut microbiota. The gut microbiota plays an important role in obesity, and this is primarily based on its influence on energy balance. Dysbiosis affects short-chain fatty acid production and metabolism and adipocyte lipid deposition, with a decrease in mitochondrial fatty acid oxidation. Human studies have reported that the balance between Bacteroidetes and Firmicutes has been related to obesity. Lean subjects have more Bacteroidetes in their gut microbiota, and a diet that restricted fats and carbohydrates was shown to shift the ratio in favour of Bacteroidetes. With regard to chronic liver disease, the proposed mechanisms for the negative effects of dysbiosis include small intestine bacterial overgrowth, altered release of inflammatory cytokines, alteration of the intestinal barrier, choline metabolism, endogenous ethanol production, regulation of hepatic toll-like receptor expression in patients with nonalcoholic fatty liver disease or nonalcoholic steatohepatitis and an alteration in bile acid metabolism. Furthermore, there is evidence that gut dysbiosis promotes the progression of nonalcoholic steatohepatitis to cirrhosis and hepatocellular carcinoma via an increase in tumour necrosis factor alpha and interleukin-8, the activation of toll-like receptor-4 and toll-like receptor-9 and the production of interleukin-1 beta in Kupffer cells, favouring lipid accumulation, hepatocyte death, steatosis, inflammation and fibrosis. There have been many animal studies that have evaluated the gut microbiota differences associated with nonalcoholic fatty liver disease or nonalcoholic steatohepatitis, but few studies have been performed in humans and they have produced inconsistent results. Patients with nonalcoholic steatohepatitis, including children, have been reported to have lower levels of Bacteroidetes than patients with liver steatosis or healthy individuals. Firmicutes have been found in higher levels in individuals with nonalcoholic fatty liver disease than in healthy subjects, but the results have not been consistent.
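The Bacteroidetes-to-Firmicutes balance discussed above is usually summarised as a simple ratio of phylum-level relative abundances derived from sequencing read counts. The short Python sketch below illustrates the arithmetic; the sample names and counts are hypothetical and are not taken from any study.

```python
def relative_abundance(phylum_counts):
    """Convert per-phylum read counts into relative abundances (fractions of the total)."""
    total = sum(phylum_counts.values())
    return {phylum: count / total for phylum, count in phylum_counts.items()}

def firmicutes_bacteroidetes_ratio(phylum_counts):
    """Ratio of Firmicutes to Bacteroidetes abundance (identical whether computed
    from raw counts or from relative abundances, since the totals cancel out)."""
    abundance = relative_abundance(phylum_counts)
    return abundance["Firmicutes"] / abundance["Bacteroidetes"]

# Hypothetical phylum-level read counts for two adult stool samples.
sample_a = {"Firmicutes": 4200, "Bacteroidetes": 4800, "Actinobacteria": 600, "Proteobacteria": 400}
sample_b = {"Firmicutes": 6500, "Bacteroidetes": 2500, "Actinobacteria": 600, "Proteobacteria": 400}

print(round(firmicutes_bacteroidetes_ratio(sample_a), 2))  # 0.88: Bacteroidetes-dominant profile
print(round(firmicutes_bacteroidetes_ratio(sample_b), 2))  # 2.6: Firmicutes-dominant profile
```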
The gut microbiota can be modulated to achieve health-promoting effects. The beneficial manipulation of the composition and metabolic footprint of the gut microbiota can be achieved by using probiotics. These can be defined as a preparation of, or a product containing, viable microorganisms in an adequate number to enable such dietary preparations to favourably modulate the gut microbiota. The ability to exert a beneficial modulation on the gut microbiota may be enhanced by combining probiotics with other ingredients (64), namely prebiotics, which are capable of favouring the growth and/or activity of microorganisms. Prebiotics appear to be poorly understood by the general public in this regard. It is important to correctly define, and understand, prebiotics and their potential when they are combined with probiotics. This information needs to be disseminated beyond the scientific community, so that regulatory agencies, the food industry and healthcare professionals can correctly describe them and suggest how they should be used. The combined use of prebiotics and probiotics may be described as synbiotic if the net health benefit is synergistic and scientifically validated. Finally, the terms paraprobiotic and postbiotic describe nonviable bacterial cells and soluble factors that are secreted as metabolic by-products by live bacteria. Such products, which could also be released after bacterial lysis, can provide additional physiological benefits to the host organism. That is why they have received increasing attention from scientific researchers and industry, due to their potential food and pharmaceutical applications (Table ).
Probiotics
Probiotics have been defined as live microorganisms that, when administered in adequate amounts, confer a health benefit on the host. The term probiotics is used widely, but not always properly, in the scientific literature and by the industry. Their fundamental characteristics have been described extensively in the literature, including their microbial origin, their viability and their benefit to the health of the host (Table ). The microbial origin of a probiotic product must be guaranteed by identifying a taxonomically defined microbe or combination of microbes. A probiotic must therefore be properly identified at the strain level and be genotypically and phenotypically characterised. An essential characteristic of a probiotic is its viability, as it must be a live microorganism that is able to survive the acidity of the stomach in order to reach and colonise the intestinal tract. Moreover, a probiotic must be guaranteed to remain viable and stable throughout the technical procedures, during its production, use and storage. A consensus statement was issued by the International Scientific Association for Probiotics and Prebiotics in 2013 with regard to the possible benefits of probiotics to human health. The statement sought to further clarify the appropriate use and scope of the term probiotic and stated that probiotics should exert specific general benefits, which it defined as core benefits. These benefits include contributing to establishing and sustaining a healthy gut microbiota. They are expected to be obtained by creating a favourable intestinal environment through nonstrain-specific beneficial actions that are shared by most probiotics, which sustain a healthy digestive tract and immune system.
In fact, some effects of probiotics can be observed across taxonomic groups and are achieved through general mechanisms, such as the inhibition of pathogens and the production of beneficial metabolites. These effects should be distinguished from other benefits, such as neurological or endocrinological effects, which are strain specific. An important aspect of probiotic activity is identifying the adequate amount that is able to confer health benefits on the host, and a specific accepted definition of this is not currently available. Nevertheless, some regulatory approaches in Canada and Italy have suggested that a probiotic product should contain at least 1 × 10^9 colony-forming units per serving to be able to exert the claimed beneficial effects. The 2013 statement also describes the different categories of live microorganisms for human use, in order to distinguish what can and cannot be considered a probiotic, according to health claims. Products claiming to contain live and active cultures should not be considered probiotics, because the simple use of the terms live and active does not imply any probiotic activity. Foods or supplements that state they contain probiotics have no specific health claims, and their expected effects are those related to the core benefits, as demonstrated by well-conducted human studies. Products containing probiotics that make specific health claims are those that claim to have any beneficial health effects, according to documented evidence from well-designed observational studies. Products containing probiotics that claim they can prevent or treat a specific disease need to be backed up by appropriate trials to meet the regulatory standards for drugs. Probiotics are commonly used in paediatric practice, and a summary of the indications and limitations is reported in Table . Their use includes preventing common and nosocomial infections, allergies and antibiotic-associated diarrhoea, treating acute gastroenteritis and functional abdominal pain disorders and preventing and treating infantile colic. Guidelines by Hojsak et al. on using probiotics in clinical practice for children were published in 2018, and they reported that probiotics generally seemed to be safe, even when provided in high doses. The authors provided a detailed description of the correct conditions for their use, together with specific positive instructions for the use of strictly defined strains for various clinical conditions. These conditions include preventing upper respiratory tract infections in children attending day care centres, nosocomial diarrhoea and antibiotic-associated diarrhoea and treating acute gastroenteritis and infantile colic in breastfed infants.
Prebiotics
The definition of prebiotics has undergone an important evolution over time. They were initially referred to as nondigestible food ingredients that beneficially affect the host by selectively stimulating the growth and/or activity of one or a limited number of bacteria already residing in the colon (S61). Several studies have focused on the nondigestible oligosaccharides fructans, namely fructooligosaccharides and inulin, and galactans, namely galactooligosaccharides, and how they exert their effects through the enrichment of Lactobacillus and/or Bifidobacterium spp. (S62). Prebiotics have been described as nondigestible compounds that confer a beneficial physiological effect on the host (S63).
They do this by being metabolised by microorganisms in the gut, thereby modulating the composition and/or activity of the gut microbiota. In 2017, the International Scientific Association for Probiotics and Prebiotics Consensus Statement proposed a new definition for prebiotics (S64). The document discussed the concept of selectivity with respect to fermentation by bacteria and suggested that prebiotics were defined as substrates that are selectively utilised by host microorganisms and confer a health benefit on the host (Table ). Incorporating the concept of selectivity in the definition is important, as it distinguishes between prebiotics and other substances. The term selective does not mean that only lactobacilli and bifidobacteria are affected by prebiotics. It means that a broader range of microorganisms, but not all, can be affected. Substances that can affect the composition of the microbiota, but are not selectively used by microorganisms, are not prebiotics. The use of prebiotics in paediatric clinical practice is currently limited. Human milk oligosaccharides are a group of prebiotics that can influence a newborn infant's gastrointestinal health by favouring the development of a healthy gut microbiota through some metabolic and immunological activities. It has been demonstrated that an infant's consumption of human milk oligosaccharides increases the proportion of human milk oligosaccharide-consuming Bifidobacteriaceae, particularly Bifidobacterium longum subsp. infantis and Bacteroidaceae (S65). The mechanisms of action in the newborn infant's intestine include immune regulation and preventing the adhesion of pathogens to the intestinal epithelium, which protects the infant from infections (S66). Some compounds that are equivalent to human milk oligosaccharides or bovine milk oligosaccharides are obtained by enzymatic synthesis. It is still a matter of debate whether these are able to exert beneficial effects on human health by selectively stimulating the microbiota and thus acting as prebiotics. The existing literature does not provide definitive conclusions, but some human milk oligosaccharides may be considered candidate prebiotics. Studies have reported that prebiotics containing immunoactive oligosaccharides could effectively prevent atopic dermatitis in low-atopy risk infants and that they could potentially be used to prevent adolescents becoming overweight. However, the clinical significance and efficacy of prebiotics and their possible widespread use in paediatric practice still needs to be clarified (S67-69).
Synbiotics
Synbiotics are commonly described as a combination of probiotics and prebiotics in functional food compounds. Functional food is a food that has been modified and claims to improve a person's health or well-being by providing benefits that extend beyond the traditional nutrients it contains. Examples of functional foods include bread, cereals and drinks that are fortified with vitamins or selected herbs. They can also contain nutraceuticals, which have physiological benefits or provide protection against chronic disease. Studies have reported that the combined use of probiotics and prebiotics has facilitated the survival of live microbial dietary supplements and their implantation in the gastrointestinal tract (S66–68). This mechanism has been reported to generate a beneficial effect in the host organism, by the metabolic activation of a restricted type of bacteria, which is considered to be health promoting, and the selective stimulation of its growth (S69).
It has been suggested that these combined conditions have improved the host's welfare (S70). Single products containing an appropriate combination of probiotics and prebiotics have been reported to guarantee a greater effect than when they have been used separately. In fact, the synbiotic activity of foods containing a combination of prebiotics and probiotics is based on their elective action in two different areas of the gut. Probiotics are mainly active in the small and large intestine, while prebiotics are mainly active in the large intestine (S69). Synbiotics act in combination in two main ways: by improving the viability of probiotic microorganisms and by providing specific benefits for the host's health. The rationale for including prebiotics in a synbiotic formulation is that they function as a selective medium, favouring the growth of certain probiotic strains, their fermentation and their intestinal passage. Furthermore, several studies have reported that prebiotics have positively influenced the ability of probiotic microorganisms to develop higher tolerance to particular situations caused by the presence of possibly challenging conditions. These include oxygenation and the pH and temperature of the intestines (S71). In brief, the main reason for using synbiotics is that the survival of probiotics in the digestive system is challenging in normal conditions and when an appropriate prebiotic is not present. Therefore, using prebiotics to stimulate the effectiveness of probiotics appears to be a good way of inducing the beneficial modulation of the metabolic activity of probiotics in the intestine. At the same time, this preserves the intestinal biostructure, favours the development and maintenance of a beneficial microbiota and inhibits the growth of potential pathogens in the gut. In general, the beneficial outcomes of synbiotics for the host's health have been related to significant increases in short-chain fatty acid levels, ketones, carbon disulphides and methyl acetates. In particular, the potential beneficial activity of synbiotics in clinical practice has been described in different clinical conditions (Table ). The reported potential therapeutic properties of synbiotics include anticarcinogenic, anti-allergic and antibacterial effects (S67). A few studies, which need to be confirmed or validated, have also suggested that synbiotics could be used to prevent constipation, diarrhoea and osteoporosis and in treating brain diseases associated with altered hepatic function (S72). Studies suggest that the synbiotic activity exerted by a combination of prebiotics and probiotics in functional food products is mainly due to their ability to modulate the host's immune system. This means that they can be used in clinical practice for selected conditions. It has been reported that healthcare professionals have used synbiotics in clinical practice before resorting to antibiotics and surgical interventions and that their use may be related to cost-effectiveness and safety considerations. Finally, the availability of synbiotic-based commercial products is rapidly increasing, due to the large number of possible existing combinations of prebiotics and probiotics. This may offer increased therapeutic options in the near future (S72).
Paraprobiotics and postbiotics In addition to the factors provided by the host organisms, further regulatory elements are able to support the maintenance and growth of the gut microbiota by favouring bacterial development, reproduction, protection from external insults and intercellular communication (S73). Data from the literature have emphasised that bacterial viability, which characterises probiotic activity, is not the exclusive factor involved in exerting health‐promoting effects (S74). In this regard, the term paraprobiotics is used to identify nonviable, inactivated microbial cells that have shown dose‐related beneficial effects for consumers. It has been suggested that paraprobiotics are safer than viable bacterial products for selected groups of patients, such as individuals with impaired immune systems, because they pose a reduced risk of infection, microbial translocation or potential inflammatory responses. Inactivated bacterial cells are typically obtained artificially, through chemical or physical methods such as heating, acid deactivation, freeze‐drying, sonication and ultraviolet treatment. This means that they are able to modify the cell structure and/or the physiological functions of the bacteria while preserving the beneficial properties of their viable forms (S74). The term postbiotics describes soluble factors that may be secreted by viable bacteria or by‐products resulting from bacterial lysis (S73, S74) (Table ). Several bacterial strains have shown the ability to express a wide range of soluble factors of different natures, including cell surface proteins, vitamins, enzymes, peptides, teichoic acids, plasmalogens and organic and short‐chain fatty acids. These cell‐free supernatant metabolites have been reported to possess antimicrobial, antioxidant and immunomodulatory properties. These can positively influence microbiotal homoeostasis as well as the metabolic and/or signalling pathways of the host organism. The active structure and mechanism of action that enable postbiotics to produce a beneficial effect in the context of physiological, immunological, neuro‐hormone biological and metabolic reactions in the host have not yet been clarified. Investigations are currently in progress to explain the beneficial health effects of postbiotic products reported in the literature. These health effects have been previously been related to their possible anti‐inflammatory, antiproliferative, antioxidant, hypocholesterolaemic, antihypertensive, anti‐obesogenic, hepatoprotective and antimicrobial activities (S74). The definitions of probiotics and prebiotics have a long history, particularly with regard to the standing of probiotics in international regulations, but there is still no consensus on their definitions. It is unclear whether these terms will be maintained or changed in future. Specific commercial products use fermentation technology, such as fermented milk‐based infant formulas, and these can be used in clinical practice to beneficially modulate the gut microbiota and gut immunity. Selected lactic acid bacterial strains are used in industrial processes to ferment cows’ milk, and these are combined with heat treatment. The end products are formulas that contain no viable bacteria or prebiotic components, but do contain specific active factors resulting from the fermentation process (S75). Metabolites produced through fermentation processes are used as raw materials for pharmaceutical products, healthcare supplements and functional foods. 
Experimental in vitro and in vivo studies have indicated that specific fermentation products are involved in establishing immune balance and oral tolerance, although the mechanism of action underlying these functions has not yet been fully explained (S75). Changes in the microbiome have been documented in many chronic, mainly immune‐mediated, gastrointestinal and liver diseases and distinct patterns have been associated with each specific disease. However, causality and the mechanisms by which the gut microflora influences the aetiopathogenesis of a disease have not been fully explained, so they may be considered limiting factors of this review. Paediatricians are on the frontline when it comes to caring for children’s health and well‐being. The strength of this review was that we have emphasised the key roles that paediatricians’ play in minimising preventable early harmful events that could permanently influence the composition and/or function of the gut microbiota.
Probiotics have been defined as live microorganisms that, when administered in adequate amounts, confer a health benefit on the host. The term probiotics is used widely, but not always properly, in the scientific literature and by the industry. Their fundamental characteristics have been described extensively in the literature, including their microbial origin, their viability and their benefit to the health of the host (Table ). The microbial origin of a probiotic product must be guaranteed by identifying a taxonomically defined microbe or combination of microbes. A probiotic must therefore be properly identified at the strain level and characterised both genotypically and phenotypically. An essential characteristic of a probiotic is its viability, as it must be a live microorganism that is able to survive the acidity of the stomach in order to reach and colonise the intestinal tract. Moreover, a probiotic must be guaranteed to remain viable and stable throughout the technical procedures, during its production, use and storage. A consensus statement was issued by the International Scientific Association for Probiotics and Prebiotics in 2013 with regard to the possible benefits of probiotics to human health. The statement sought to further clarify the appropriate use and scope of the term probiotic and stated that probiotics should exert specific general benefits, which it defined as core benefits. These benefits include contributing to establishing and sustaining a healthy gut microbiota. They are expected to be obtained by creating a favourable intestinal environment through nonstrain-specific beneficial actions that are shared by most probiotics, which sustain a healthy digestive tract and immune system. In fact, some effects of probiotics can be observed across taxonomic groups and are achieved through general mechanisms, such as the inhibition of pathogens and the production of beneficial metabolites. These effects should be distinguished from other benefits, such as neurological or endocrinological effects, which are strain specific. An important aspect of probiotic activity is identifying the adequate amount that is able to confer health benefits on the host, and a specific accepted definition of this is not currently available. Nevertheless, some regulatory approaches in Canada and Italy have suggested that a probiotic product should contain at least 1 × 10^9 colony-forming units per serving to be able to exert the claimed beneficial effects. The 2013 Statement also describes the different categories of live microorganisms for human use, in order to distinguish what can and cannot be considered a probiotic, according to health claims. Products claiming to contain live and active cultures should not be considered probiotics, because the simple use of the terms live and active does not imply any probiotic activity. Foods or supplements that state they contain probiotics have no specific health claims, and their expected effects are those related to the core benefits, as demonstrated by well-conducted human studies. Products containing probiotics that make specific health claims are those that claim to have any beneficial health effects, according to documented evidence from well-designed observational studies. Products containing probiotics that claim they can prevent or treat a specific disease need to be backed up by appropriate trials to meet the regulatory standards for drugs. 
Probiotics are commonly used in paediatric practice, and a summary of the indications and limitations is reported in Table . Their use includes preventing common and nosocomial infections, allergies and antibiotic‐associated diarrhoea, treating acute gastroenteritis and functional abdominal pain disorders and preventing and treating infantile colic. Guidelines by Hojsak et al. on using probiotics in clinical practice for children were published in 2018 , and the study reported that they seemed to be safe in general, even when provided in high doses. The authors provided a detailed description of the correct conditions for their use, together with specific positive instructions for the use of strictly defined strains for various clinical conditions. These conditions include preventing upper respiratory tract infections in children attending day care centres, nosocomial diarrhoea and antibiotic‐associated diarrhoea and treating acute gastroenteritis and infantile colic in breastfed infants.
The definition of prebiotics has undergone an important evolution over time. They were initially referred to as nondigestible food ingredients that beneficially affect the host by selectively stimulating the growth and/or activity of one or a limited number of bacteria already residing in the colon (S61). Several studies have focused on the nondigestible oligosaccharides fructans, namely fructooligosaccharides and inulin, and galactans, namely galactooligosaccharides, and how they exert their effects through the enrichment of Lactobacillus and/or Bifidobacterium spp. (S62). Prebiotics have been described as nondigestible compounds that confer a beneficial physiological effect on the host (S63). They do this by metabolising microorganisms in the gut, which then modulate the composition and/or activity of the gut microbiota. In 2017, the International Scientific Association for Probiotics and Prebiotics Consensus Statement proposed a new definition for prebiotics (S64). The document discussed the concept of selectivity with respect to fermentation by bacteria and suggested that prebiotics were defined as substrates that are selectively utilised by host microorganisms and confer a health benefit on the host (Table ). Incorporating the concept of selectivity in the definition is important, as it distinguishes between prebiotics and other substances. The term selective does not mean that only lactobacilli and bifidobacteria are affected by prebiotics. It means that a broader range of microorganisms, but not all, can be affected. Substances that can affect the composition of the microbiota, but are not selectively used by microorganisms, are not prebiotics. The use of prebiotics in paediatric clinical practice is currently limited. Human milk oligosaccharides are a group of prebiotics that can influence a newborn infant’s gastrointestinal health by favouring the development of a healthy gut microbiota through some metabolic and immunological activities. It has been demonstrated that an infant’s consumption of human milk oligosaccharides increases the proportion of human milk oligosaccharide‐consuming Bifidobacteriaceae , particularly Bifidobacterium longum subsp. infantis and Bacteroidaceae (S65). The mechanisms of action in the newborn infant’s intestine include immune regulation and preventing the adhesion of pathogens to the intestinal epithelium, which protects the infant from infections (S66). Some compounds that are equivalent to human milk oligosaccharides or bovine milk oligosaccharides are obtained by enzymatic synthesis. It is still a matter of debate whether these are able to exert beneficial effects on human health by selectively stimulating the microbiota and thus acting as prebiotics. The existing literature does not provide definitive conclusions, but some human milk oligosaccharides may be considered candidate prebiotics. Studies have reported that prebiotics containing immunoactive oligosaccharides could effectively prevent atopic dermatitis in low‐atopy risk infants and that they could potentially be used to prevent adolescents becoming overweight. However, the clinical significance and efficacy of prebiotics and their possible widespread use in paediatric practice still needs to be clarified (S67‐69).
Synbiotics are commonly described as a combination of probiotics and prebiotics in functional food compounds. Functional food is a food that has been modified and claims to improve a person’s health or well‐being by providing benefits that extend beyond the traditional nutrients it contains. Examples of functional foods include bread, cereals and drinks that are fortified with vitamins or selected herbs. They can also contain nutraceuticals, which have physiological benefits or provide protection against chronic disease. Studies have reported that their combined use has facilitated the survival of live microbial dietary supplements and their implantation in the gastrointestinal tract (S66–68). This mechanism has been reported to generate a beneficial effect in the host organism, by the metabolic activation of a restricted type of bacteria, which is considered to be health promoting, and the selective stimulation of its growth (S69). It has been suggested that these combined conditions have improved the host’s welfare (S70). Single products containing an appropriate combination of probiotics and prebiotics have been reported to guarantee a greater effect than when they have been used separately. In fact, the synbiotic activity of foods containing a combination of prebiotics and probiotics is based on their elective action in two different areas of the gut. Probiotics are mainly active in the small and large intestine, while prebiotics are mainly active in the large intestine (S69). Synbiotics act in combination in two main ways: by improving the viability of probiotic microorganisms and by providing specific benefits for the host’s health. The rationale of using a synbiotic formulation of prebiotics is because they function as a selective medium, favouring the growth of certain probiotic strains, their fermentation and their intestinal passage. Furthermore, several studies have reported that prebiotics have positively influenced the ability of probiotic microorganisms to develop higher tolerance to particular situations caused by the presence of possibly challenging conditions. These include oxygenation and the pH and temperature of the intestines (S71). In brief, the main reason for using synbiotics is that the survival of probiotics in the digestive system is challenging in normal conditions and when an appropriate prebiotic is not present. Therefore, using prebiotics to stimulate the effectiveness of probiotics appears to be a good way of inducing the beneficial modulation of the metabolic activity of probiotics in the intestine. At the same time, this preserves the intestinal biostructure, favours the development and maintenance of a beneficial microbiota and inhibits the growth of potential pathogens in the gut. In general, the beneficial outcomes of synbiotics for the host’s health have been related to significant increases in short‐chain fatty acid levels, ketones, carbon disulphides and methyl acetates. In particular, the potential beneficial activity of synbiotics in clinical practice has been described in different clinical conditions (Table ). The reported potential therapeutic properties of synbiotics include anticarcinogenic, anti‐allergic and antibacterial effects (S67). A few studies, which need to be confirmed or validated, have also suggested that synbiotics could be used to prevent constipation, diarrhoea and osteoporosis and in treating brain diseases associated with altered hepatic function (S72). 
Studies suggest that the synbiotic activity exerted by a combination of prebiotics and probiotics in functional food products is mainly due to their ability to modulate the host’s immune system. This means that they can be used in clinical practice for selected conditions. It has been reported that healthcare professionals have used synbiotics in clinical practice before using antibiotics and surgery interventions and that their use may be related to cost‐effectiveness and safety considerations. Finally, the availability of synbiotic‐based commercial products is rapidly increasing, due to the large number of possible existing combinations of prebiotics and probiotics. This may offer increased therapeutic options in the near future (S72).
In addition to the factors provided by the host organisms, further regulatory elements are able to support the maintenance and growth of the gut microbiota by favouring bacterial development, reproduction, protection from external insults and intercellular communication (S73). Data from the literature have emphasised that bacterial viability, which characterises probiotic activity, is not the exclusive factor involved in exerting health-promoting effects (S74). In this regard, the term paraprobiotics is used to identify nonviable, inactivated microbial cells that have shown dose-related beneficial effects for consumers. It has been suggested that paraprobiotics are safer than viable bacterial products for selected groups of patients, such as individuals with impaired immune systems, because they pose a reduced risk of infection, microbial translocation or potential inflammatory responses. Inactivated bacterial cells are typically obtained artificially, through chemical or physical methods such as heating, acid deactivation, freeze-drying, sonication and ultraviolet treatment. This means that they are able to modify the cell structure and/or the physiological functions of the bacteria while preserving the beneficial properties of their viable forms (S74). The term postbiotics describes soluble factors that may be secreted by viable bacteria or by-products resulting from bacterial lysis (S73, S74) (Table ). Several bacterial strains have shown the ability to express a wide range of soluble factors of different natures, including cell surface proteins, vitamins, enzymes, peptides, teichoic acids, plasmalogens and organic and short-chain fatty acids. These cell-free supernatant metabolites have been reported to possess antimicrobial, antioxidant and immunomodulatory properties. These can positively influence microbiotal homoeostasis as well as the metabolic and/or signalling pathways of the host organism. The active structure and mechanism of action that enable postbiotics to produce a beneficial effect in the context of physiological, immunological, neuro-hormone biological and metabolic reactions in the host have not yet been clarified. Investigations are currently in progress to explain the beneficial health effects of postbiotic products reported in the literature. These health effects have previously been related to their possible anti-inflammatory, antiproliferative, antioxidant, hypocholesterolaemic, antihypertensive, anti-obesogenic, hepatoprotective and antimicrobial activities (S74). The definitions of probiotics and prebiotics have a long history, particularly with regard to the standing of probiotics in international regulations, but there is still no consensus on their definitions. It is unclear whether these terms will be maintained or changed in future. Specific commercial products use fermentation technology, such as fermented milk-based infant formulas, and these can be used in clinical practice to beneficially modulate the gut microbiota and gut immunity. Selected lactic acid bacterial strains are used in industrial processes to ferment cows’ milk, and these are combined with heat treatment. The end products are formulas that contain no viable bacteria or prebiotic components, but do contain specific active factors resulting from the fermentation process (S75). Metabolites produced through fermentation processes are used as raw materials for pharmaceutical products, healthcare supplements and functional foods. 
Experimental in vitro and in vivo studies have indicated that specific fermentation products are involved in establishing immune balance and oral tolerance, although the mechanism of action underlying these functions has not yet been fully explained (S75). Changes in the microbiome have been documented in many chronic, mainly immune-mediated, gastrointestinal and liver diseases and distinct patterns have been associated with each specific disease. However, causality and the mechanisms by which the gut microflora influences the aetiopathogenesis of a disease have not been fully explained, so they may be considered limiting factors of this review. Paediatricians are on the frontline when it comes to caring for children’s health and well-being. The strength of this review was that we have emphasised the key roles that paediatricians play in minimising preventable early harmful events that could permanently influence the composition and/or function of the gut microbiota.
The basic science relating to the gut microbiota is changing rapidly, as clinical data provide evidence of the importance of diversity in the microbial community and point to the generally accepted role of so-called protective bacteria. Gut microbes are moving rapidly from being considered potentially dangerous to being considered a positive influence on health when they are properly implemented. In clinical practice, modulation of the gut microbiota may be achieved by using several approaches, including probiotics, prebiotics, synbiotics, paraprobiotics and postbiotics. A better understanding of the potential impacts of the gut microbiota on human health, and of the use of related commercially available products, would lead to more appropriate use of these products in clinical practice by healthcare professionals.
This study did not receive any external funding.
Sanja Kolaček has received lecture fees, travel grants and unrestricted support for the hospital from Abbott, AbbVie, BioGaia, Fresenius, Medis, Nestle, Nutricia. Iva Hojsak has received lecture fees and consultation fees from BioGaia, Medis, Nestle, Nutricia, Fresenius Kabi and Chr Hansen.
Appendix S1 References.
GraphTar: applying word2vec and graph neural networks to miRNA target prediction | 70599e52-c32d-4ea4-ac56-9128cb8441c9 | 10657114 | Internal Medicine[mh] | Discovered in 1993 , microRNAs (miRNAs) are a family of short, non-coding RNA molecules that have the ability to influence gene expression. In the RNA interference phenomenon, the miRNA strand binds to an Argonaute protein (AGO) . The resulting miRNA-induced silencing complex (miRISC) can target specific mRNAs, depending on the miRNA nucleotide sequence, and inhibit their translation. As a result, miRNAs can influence the expression of certain genes and take part in the regulation of a number of biological processes in the human organism . Crucially, miRNAs are believed to influence the development of cardiovascular , oncological , and gastrointestinal diseases , as well as viral infections . Because of this ability, the study of miRNAs and their respective targets may prove crucial for the design of novel diagnostic and treatment methods. Experimental validation of miRNA–mRNA target pairs (duplexes) interaction is difficult and costly due to the large number of potential interactions that need to be examined. Computational methods that are able to filter out potential pairs that can be later validated experimentally are therefore of utmost importance to streamline the process of discovering valid miRNA–mRNA targets and increase the efficiency of patient diagnosis and treatment in the healthcare of the future. Earliest computational methods proposed for target prediction employed expert-based knowledge to classify miRNA–mRNA sequence pairs. For this purpose, metrics inspired by the literature on interaction mechanics were calculated based on the composition of each sequence. The most common metrics include site complementarity , site conservation , and free energy estimation . After the calculation, the scores were graded using a rule-based system to determine the binding probability between the pair of RNAs considered. With an increasing number of experimentally validated miRNA–mRNA pairs and the rise of Machine Learning (ML) in other domains, classical ML algorithms were also introduced into target prediction, in hopes to outperform rule-based systems. Just like the latter, ML methods used calculated metrics as input, but the rules for calculation of the binding probability were learned by the ML classifier directly from the training data, as opposed to being preset. The most popular classifiers used for this task include support vector machines (SVM) , boosting methods , Bayesian probabilistic models , and feedforward neural networks . Although widely used, expert-based methods have two main limitations. First, while pre-engineered features have a documented impact on the miRNA–mRNA interaction, the intrinsic mechanisms behind the binding process are not fully understood. Thus, relying solely on this knowledge may prevent algorithms from achieving optimal performance. Second, the need to calculate interaction metrics based on the sequences adds an often tedious computation step to the procedure, increasing the execution time during inference. To address these limitations, a new family of deep learning (DL) methods has emerged that can process and classify raw RNA data without using any a priori knowledge. After training, these models can extract accurate feature representations of the miRNA–mRNA pair and predict the probability of binding. 
Despite the limitations that deep learning methods face, such as their reliance on high-quality data, the need for substantial computational resources, interpretability challenges, and the common issue of overfitting, the improving quality of datasets has accelerated their adoption for classification tasks in various fields of computational biology, including protein-protein interaction , RNA-disease , and miRNA–mRNA interaction, which is the specific domain of focus for this study. Various DL architectures have been used for this purpose, including feedforward, convolutional, and recurrent neural networks. Some methods have also utilized pre-trained autoencoders to improve feature extraction. One of the first methods, DeepTarget , trained separate autoencoders on miRNA and mRNA one-hot encoded sequences in an unsupervised manner. The encoders extracted from the trained models were then used to build a feature representation of each sequence, and the concatenation of the features served as input to a recurrent neural network that acted as a classifier. In DeepMirTar , a hybrid approach was proposed where expert-based features and one-hot encoded raw sequence data were processed together by a neural network consisting of a pre-trained stacked denoising autoencoder (SdA) and a fully connected layer on top. Notably, the authors used an unsupervised pre-training strategy for the SdA. In MiRAW , authors used a standard autoencoder, but pre-trained it in an end-to-end manner, unlike the layer-by-layer approach of DeepMirTar. Moreover, the miRAW model used only one-hot encoded sequence data as input without any expert features. The architecture consisted of an encoder extracted after pre-training and a set of fully connected layers as a classifier. One of the most recent methods in this family is miTAR . Gu et al. proposed an architecture that combined recurrent and convolutional layers in a single architecture. The one-hot encoded sequences are concatenated and processed by a set of 1D convolutional layers. Next, a max-pooling operator is applied, and the output is inferred through a bidirectional long short-term memory layer (BiLSTM) . The extracted information is then classified by fully connected layers. The authors hypothesize that convolutional layers can extract compact, spatial features, and this ability is complemented by a recurrent layer that excels in learning long-term sequential features. As a result, the combination of these traits is believed to improve the performance of the model. The duplex representation used by the aforementioned DL methods is essentially a 1D vector of concatenated duplex sequences. Although it has been shown to provide decent results, we believe that it may not be the best way to represent the miRNA–mRNA interaction. Firstly, DL methods operating on sequences in other domains, e.g. in Natural Language Processing (NLP), have moved away from encoding the input using one-hot encoding towards more sophisticated sequence embedding methods, such as word2vec , with great success. This idea has already been applied to the analysis of biological sequences but not in the target prediction domain. In this work, we describe a way to apply word2vec with the hope of improving the predictions of our DL model. Secondly, in reality, miRISC molecules bind directly to the targeted RNAs, creating spatial, graph-like secondary structures. 
To this end, to improve on existing target prediction methods, we propose a novel DL method that exploits the graph representation of the duplex. The method can classify miRNA–mRNA pairs in an end-to-end manner from raw sequences. To process data in an unstructured form, we employ graph neural networks (GNNs), which have recently gained considerable attention and have been successfully applied to several bioinformatics problems related to graph representation learning, including the prediction of protein-protein interaction, prediction of drug response, and prediction of protein structure . At the same time, GNNs have not been used in miRNA–mRNA target prediction, and therefore, to the best of our knowledge, we document the first use of graph representation of the duplex and GNNs as classifiers in this domain. To find a suitable GNN architecture, in this study, we compare three popular node embedding methods: Graph Convolutional Networks, GraphSAGE, and Graph Attention Networks. We denote the proposed framework as GraphTar to emphasize the use of spatial, graph representation of the duplex. As a final contribution of this work, we validate the method against state of the art by fully reimplementing and reproducing miRAW, DeepMirTar, and miTAR experiments to compare the predictive performance. As mentioned earlier, we believe that expert-based methods have significant limitations due to inaccurate prior assumptions and substantial data preprocessing overhead. Therefore, in this study, we will concentrate on applying GraphTar, along with competing methods, to raw sequences.
Dataset

We utilized the meticulously curated dataset collection from the miTAR study in our experiments. Gu et al. employed experimentally validated pairs from the miRAW and DeepMirTar studies. Data for the miRAW study was originally extracted from Diana TarBase and MirTarBase . Additionally, target site sequences were cross-referenced with PAR-CLIP , CLASH , and TargetScanHuman 7.1 . Data used in the DeepMirTar study was collected from the mirMark and CLASH datasets. To apply the most recent knowledge, Gu et al. filtered out the samples containing miRNAs not present in the 22nd release of miRBase . From the miTAR dataset, we derived four distinct datasets: two sets designated for training, validation, and testing (referred to as miRAW and DeepMirTar), and two additional independent test sets (referred to as miRAWIn and DeepMirTarIn). Furthermore, the miTAR authors compiled a consolidated dataset named MirTarRAW, composed of 33% of data from the miRAW set and 90% of data from the DeepMirTar set. This resulted in a unified dataset containing an equal number of samples from the miRAW and DeepMirTar sets. Each sample within the dataset comprises a pair of miRNA and mRNA sequences, wherein nucleotides are labeled using nucleic acid notation (characters from the set {A, C, G, T, U}), along with a corresponding binary label. A label of 1 indicates a positive interaction, i.e., the miRNA targets the respective mRNA, while a label of 0 signifies a negative interaction. The comprehensive statistics of each dataset used in our study are presented in Table . It is noteworthy that we decided to evaluate the methods only using the miRAW, DeepMirTar, and MirTarRAW datasets owing to the low number of samples in the independent datasets.

Sequence encoding

Reproduced methods

To classify duplex sequences using the reproduced target prediction methods, we first encoded them using one-hot encoding. This resulted in each nucleotide being represented numerically with a sparse, one-hot vector. Since the mRNA sequences contain Thymine and miRNA sequences contain Uracil, we unified the representation by using the same sparse vector to represent both nucleotides. To ensure a consistent input size, a padding nucleotide ’N’ was added, which has its own sparse representation. To account for this additional padding, one-hot vectors were set to a dimensionality of 5. To evaluate the reproduced methods in our experiments, we padded the miRNA sequences with this nucleotide to the length of the longest miRNA in the respective dataset, and repeated the same process for the mRNA sequences.
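To make this step concrete, the sketch below shows one possible implementation of the padding and one-hot encoding described above; the helper name, the example sequence and the target length of 30 are illustrative choices rather than values taken from the original pipelines.

```python
import numpy as np

# One-hot rows for the five symbols; 'T' and 'U' share a row and 'N' is the padding symbol.
NUCLEOTIDE_TO_ROW = {"A": 0, "C": 1, "G": 2, "T": 3, "U": 3, "N": 4}

def one_hot_encode(sequence, target_length):
    """Pad a sequence with 'N' up to target_length and one-hot encode it."""
    padded = sequence.upper().ljust(target_length, "N")
    encoded = np.zeros((target_length, 5), dtype=np.float32)
    for position, nucleotide in enumerate(padded):
        encoded[position, NUCLEOTIDE_TO_ROW[nucleotide]] = 1.0
    return encoded

# Example: pad a miRNA to the length of the longest miRNA in the dataset (assumed here to be 30).
print(one_hot_encode("UGAGGUAGUAGGUUGUAUAGUU", target_length=30).shape)  # (30, 5)
```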
The proposed method

To prepare encoded inputs for the proposed method, we applied a word2vec-based encoding technique. Word2vec emerged in the natural language processing domain in 2013 as a way to create a vector representation of words. In one of its variants, the Continuous Bag of Words (CBOW) neural network model is trained to predict a selected word based on the context, which is a set of surrounding words in the given sentence. The model can be trained on a set of sentences, each consisting of words from a certain corpus. After training, the hidden layer weights of the model can be considered a lookup table, where each row of the weight matrix at the respective position of the word contains its latent vector representation, learned by the model. The power of this method comes from the fact that words with similar meanings are usually characteristic of similar contexts. On this basis, word2vec can capture functional relationships in language and place synonyms close to each other in the latent space, whereas words with a likely occurrence in different contexts are placed far apart. This trait can also be used in the analysis of biological sequences, as demonstrated by Asgari and Mofrad . To use this method with duplex sequences, we first extracted all miRNA and mRNA sequences from the training dataset into distinct sets. Next, we divided each sequence into 3-nucleotide words in both groups. This way, sequences could be regarded as sentences consisting of words. If a sequence length is not divisible by 3, we kept the remaining 1- or 2-nucleotide fragment as a word. Finally, we trained a distinct CBOW model on each set of sentences: one on miRNA sequences, and one on mRNA sequences. As a result, for each word in the dataset, we obtained a dense vector representation by inferring it through the corresponding model. We set the dimensionality of the resulting vectors to 16. With the trained models in place, the input to our method could be prepared by first splitting the sequence into words and then inferring the words through the corresponding CBOW model. Finally, we stacked the resulting dense vectors. With this encoding procedure, one dataset sample consisted of two stacks of dense vectors, one for each sequence in the duplex pair. Note that we conducted a separate experiment for each dataset (DeepMirTar, miRAW, and MirTarRAW), and therefore trained a total of six word2vec models. Moreover, we trained these models only on the respective training datasets to prevent any knowledge from leaking from the test samples.
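As an illustration of this encoding, the following sketch trains a CBOW model on 3-nucleotide words with Gensim and stacks the resulting 16-dimensional vectors. Apart from the 3-mer splitting, vector_size=16 and sg=0 (CBOW), the remaining parameters (window, min_count) and the toy sequences are assumptions, not values reported here.

```python
import numpy as np
from gensim.models import Word2Vec

def to_words(sequence, word_length=3):
    """Split a sequence into 3-nucleotide words; a 1- or 2-nucleotide remainder stays a word."""
    return [sequence[i:i + word_length] for i in range(0, len(sequence), word_length)]

# "Sentences" are the tokenised training sequences (two toy miRNAs shown here).
mirna_sentences = [to_words("UGAGGUAGUAGGUUGUAUAGUU"), to_words("UAAAGUGCUUAUAGUGCAGGUAG")]

# CBOW model (sg=0) producing 16-dimensional word vectors.
mirna_model = Word2Vec(sentences=mirna_sentences, vector_size=16, sg=0, window=5, min_count=1)

# Encode one sequence as a stack of dense word vectors.
encoded = np.stack([mirna_model.wv[word] for word in to_words("UGAGGUAGUAGGUUGUAUAGUU")])
print(encoded.shape)  # (number_of_words, 16)
```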
Graph classification with GNNs

Classifier overview

The high-level overview of the GraphTar method is presented in Fig. . Firstly, the word2vec encoded sequences of the duplex are used to create an input graph in the input graph preparation procedure. The resulting graph representation is then passed through a set of GNN layers, which results in latent graph node space embeddings. Next, the node embeddings are aggregated using a prediction head operator, to form a graph embedding. Finally, the graph embedding is passed through a set of fully connected layers, resulting in a prediction vector.

Input graph preparation

To create a graph representation of the miRNA–mRNA duplex under consideration, we treat the word2vec encoded words as nodes of the graph. To construct an undirected graph, we align the sequences by the starting word and connect the closest words within and between the sequences with a bidirectional edge. This process is illustrated in Fig. .
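A minimal sketch of how such a duplex graph could be assembled as a PyTorch Geometric Data object is given below. The pairing rule for the "closest words" between the two sequences is one plausible reading of the description above (position-wise pairing after aligning by the starting word), so the helper should be treated as illustrative rather than the reference implementation.

```python
import torch
from torch_geometric.data import Data

def build_duplex_graph(mirna_vectors, mrna_vectors, label):
    """mirna_vectors, mrna_vectors: [num_words, 16] float tensors of word2vec features."""
    x = torch.cat([mirna_vectors, mrna_vectors], dim=0)  # node feature matrix
    n_mi, n_m = mirna_vectors.size(0), mrna_vectors.size(0)
    edges = []
    # Edges between consecutive words within each sequence.
    edges += [(i, i + 1) for i in range(n_mi - 1)]
    edges += [(n_mi + i, n_mi + i + 1) for i in range(n_m - 1)]
    # Edges between words at the same position of the two aligned sequences.
    edges += [(i, n_mi + i) for i in range(min(n_mi, n_m))]
    # Make the graph undirected by adding the reverse direction of every edge.
    edges += [(j, i) for (i, j) in edges]
    edge_index = torch.tensor(edges, dtype=torch.long).t().contiguous()
    return Data(x=x, edge_index=edge_index, y=torch.tensor([label]))

# Example usage with random features standing in for word2vec vectors.
graph = build_duplex_graph(torch.randn(8, 16), torch.randn(13, 16), label=1)
print(graph)
```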
GNN layers

The proposed classifier is composed of two parts. The first part consists of a set of GNN layers, which can process the miRNA–mRNA duplex graph created in the input graph preparation procedure. During training, the layers learn to extract accurate node embeddings, which can be used for various machine learning tasks. Many GNN layer variants have been proposed in the literature. For this study, we selected three established approaches for node embedding extraction: Graph Convolutional Networks (GCNs), the GraphSAGE inductive framework, and Graph Attention Networks (GATs) for evaluation.

GCNs were originally introduced by Kipf and Welling for semi-supervised node classification. The approach is motivated by an approximation of spectral graph convolutions . The layer-wise propagation rule introduced in their work can operate directly on graphs and was able to solve semi-supervised node classification problems on citation networks with state-of-the-art results. For an input graph with V nodes with features X, the layer-wise propagation rule proposed in their work is:

$$H^{(l+1)} = \sigma\left(\tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} H^{(l)} W^{(l)}\right)$$

where $\tilde{D}$ is a graph degree matrix, $\tilde{A}$ is the adjacency matrix, $H^{(l)} \in \mathbb{R}^{V \times D}$ is a matrix of node features at layer $l$ ($H^{(0)} = X$), and $W^{(l)}$ is a trainable weight matrix. This method can be generalized to unseen graphs, as all trainable weights at each layer can be shared between the nodes. The weights can then be used during inference on unseen nodes.
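As an example of this propagation rule in practice, the sketch below stacks two GCNConv layers from PyTorch Geometric; the number of layers and the hidden dimensionality are illustrative placeholders, since these are among the hyperparameters tuned in our experiments.

```python
import torch
from torch_geometric.nn import GCNConv

class GCNEncoder(torch.nn.Module):
    """Two GCN layers that map 16-dimensional word2vec node features to node embeddings."""
    def __init__(self, in_dim=16, hidden_dim=32):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, hidden_dim)

    def forward(self, x, edge_index):
        h = torch.relu(self.conv1(x, edge_index))   # H^(1)
        h = torch.relu(self.conv2(h, edge_index))   # H^(2)
        return h                                    # [num_nodes, hidden_dim]
```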
Hamilton, Ying, and Leskovec extended the idea behind GCNs by proposing a general, inductive framework for node embedding—GraphSAGE . In GraphSAGE, the forward propagation algorithm is also layer-wise and makes use of two operators: aggregation (AGG) and concatenation (CONCAT). For a graph with V nodes, the steps leading to the computation of the feature h of node v at layer l (denoted as $h_v^l$) are as follows:

$$h_{\mathcal{N}(v)}^{l} = AGG_l\left(\{h_u^{l-1}, \forall u \in \mathcal{N}(v)\}\right),$$
$$h_v^{l} = \sigma\left(W^{l} \cdot CONCAT\left(h_v^{l-1}, h_{\mathcal{N}(v)}^{l}\right)\right),$$
$$h_v^{l} = h_v^{l} / \|h_v^{l}\|_2$$

Note again that, if we denote the original feature vector of node v as $x_v$, then $h_v^0 = x_v$. In the first step, the node neighborhood $\mathcal{N}(v)$ is passed to the AGG operator, which yields a representation of the neighborhood $h_{\mathcal{N}(v)}^{l}$. The aggregation operator can be simply a computation of the element-wise mean of the neighbor feature vectors at the previous layer, but the framework is flexible and more complicated functions can be used. In the original article, the authors also used an LSTM aggregator and a pooling aggregator, consisting of a fully connected neural network and an element-wise max-pooling operation. In the second step of the forward pass, the latest representation of node v, $h_v^{l-1}$, is concatenated with its neighborhood representation. The result is then multiplied with a weight matrix at the respective layer, $W^l$, shared between all nodes. This gives the framework an inductive characteristic: it is able to generalize to unseen nodes. Finally, a non-linearity is applied and the vector is normalized to unit length. It is worth noting that aggregation of the node neighborhood may be expensive for large graphs. To solve this issue, the authors propose to sample a fixed-size neighborhood, instead of taking into account all neighbors.
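A corresponding GraphSAGE encoder can be sketched with SAGEConv layers, here with the simple mean aggregator and the optional L2 normalization of the node vectors; again, the depth and hidden size are placeholder values rather than tuned settings.

```python
import torch
from torch_geometric.nn import SAGEConv

class SAGEEncoder(torch.nn.Module):
    """GraphSAGE-style node embedding with mean aggregation of the neighborhood."""
    def __init__(self, in_dim=16, hidden_dim=32):
        super().__init__()
        self.sage1 = SAGEConv(in_dim, hidden_dim, aggr="mean")
        self.sage2 = SAGEConv(hidden_dim, hidden_dim, aggr="mean")

    def forward(self, x, edge_index):
        h = torch.relu(self.sage1(x, edge_index))
        # L2-normalize the node vectors, as in the original GraphSAGE algorithm.
        h = torch.nn.functional.normalize(h, p=2, dim=-1)
        h = torch.relu(self.sage2(h, edge_index))
        return torch.nn.functional.normalize(h, p=2, dim=-1)
```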
Graph Attention Networks were proposed by Veličković et al. as an alternative to spectral (e.g. GCNs) and non-spectral (e.g. GraphSAGE) node embedding approaches. GATs use a self-attention mechanism which allows the network to choose important nodes in the node’s neighborhood. For this purpose, at each layer, the attention coefficients are calculated first. Using the neighborhood notation from the GraphSAGE description, for node v, the attention coefficient $e_{vu}$ is calculated as follows:

$$e_{vu} = a\left(W_k h_u^{k-1}, W_k h_v^{k-1}\right), \quad \forall u \in \mathcal{N}(v)$$

where $W_k$ is a weight matrix applied to every node and shared within a layer k, and $h_v^{k-1}$ is the feature vector of node v at layer $k-1$. The attention mechanism a is a fully connected neural network with LeakyReLU nonlinearity. After computing the attention coefficients, they are normalized using the softmax function, so that the resulting neighbor importances sum to 1:

$$\alpha_{vu} = \frac{\exp(e_{vu})}{\sum_{n \in \mathcal{N}(v)} \exp(e_{vn})}$$

Finally, the normalized attention coefficients can be used to calculate the node feature vector $h_v^k$, after applying a non-linearity:

$$h_v^k = \sigma\left(\sum_{u \in \mathcal{N}(v)} \alpha_{vu} W_k h_u^{k-1}\right)$$
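The same interface applies to attention-based embedding: the sketch below uses GATConv layers, which compute the attention coefficients and their softmax normalization internally. The use of four attention heads is an illustrative choice and not part of the description above.

```python
import torch
from torch_geometric.nn import GATConv

class GATEncoder(torch.nn.Module):
    """Attention-based node embedding; each layer learns attention weights over neighbors."""
    def __init__(self, in_dim=16, hidden_dim=32, heads=4):
        super().__init__()
        # With concat=True the first layer outputs hidden_dim * heads features per node.
        self.gat1 = GATConv(in_dim, hidden_dim, heads=heads, concat=True)
        self.gat2 = GATConv(hidden_dim * heads, hidden_dim, heads=1, concat=False)

    def forward(self, x, edge_index):
        h = torch.nn.functional.elu(self.gat1(x, edge_index))
        return self.gat2(h, edge_index)
```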
In our experiments, we evaluate the use of the aforementioned node embedding approaches to compare their performance. Regardless of the approach used, for a graph with V nodes the GNN layers yield a $D \times V$ embedding matrix, where D is the dimensionality of the node feature vectors. D is one of the hyperparameters that has to be tuned to obtain the best performance.

Prediction head

Node features extracted by the GNN layers contain important information about the nodes and can be used for various tasks, including node, edge, and graph classification. The prediction head operator transforms these embeddings into a suitable representation for the chosen task. In this study, we aim to solve a graph classification problem, and thus, we need to acquire a graph representation based on the node embeddings that can be classified by a set of fully connected layers. To achieve this goal, we apply a pooling operator that aggregates the node information into a graph feature vector with D features, which is equal to the dimensionality of the node feature vectors. We consider three classic pooling operators, established in the literature : global add pooling (ADD), global mean pooling (MEAN), and global max pooling (MAX), and choose the best one for each embedding approach during the hyperparameter tuning process. These operators apply element-wise sum, averaging, or maximum operations to the node feature vectors, yielding an aggregated graph representation. The operators preserve different aspects of the graph. For instance, global add pooling provides distinct graph embeddings for graphs of varying sizes, as a greater number of nodes will generally provide greater element-wise sum values. On the other hand, global mean pooling can provide similar graph embeddings, even if the considered graphs vary greatly in size—when averaging, similar element-wise values can be obtained for both large and small graphs.

Fully connected layers

The vector with the graph embedding is processed by a set of fully connected layers with ReLU activations. As a regularization mechanism, we use Dropout after every fully connected layer but the last. The final layer returns two values and is normalized using the softmax function. This yields two probability values, indicating whether the considered graph is an instance of a positive or negative interaction.
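Putting the pieces together, a condensed sketch of a GraphTar-style classifier is shown below: GNN layers produce node embeddings, a global pooling operator acts as the prediction head, and fully connected layers with Dropout and a softmax output perform the classification. The specific embedding type (GCN), pooling operator (mean) and layer sizes are placeholders for the combinations selected during hyperparameter tuning, not the final tuned configuration.

```python
import torch
from torch_geometric.nn import GCNConv, global_mean_pool

class GraphClassifier(torch.nn.Module):
    """GNN node embedding, a pooling prediction head and fully connected layers."""
    def __init__(self, in_dim=16, hidden_dim=32, dropout=0.3):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, hidden_dim)
        self.fc1 = torch.nn.Linear(hidden_dim, hidden_dim)
        self.dropout = torch.nn.Dropout(dropout)
        self.fc2 = torch.nn.Linear(hidden_dim, 2)

    def forward(self, data):
        # data.batch maps each node to its graph when samples are batched with
        # torch_geometric.loader.DataLoader.
        h = torch.relu(self.conv1(data.x, data.edge_index))
        h = torch.relu(self.conv2(h, data.edge_index))
        g = global_mean_pool(h, data.batch)           # prediction head: graph embedding
        g = self.dropout(torch.relu(self.fc1(g)))
        return torch.softmax(self.fc2(g), dim=-1)     # probabilities for the two classes
```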
Evaluation metrics

To evaluate the performance of the reproduced methods and the GNN classifiers proposed in this work, we report the following metrics: Balanced accuracy (BACC), Precision, Recall. These metrics are calculated as follows:

$$BACC = \frac{\frac{TP}{TP+FN} + \frac{TN}{TN+FP}}{2}, \quad Precision = \frac{TP}{TP+FP}, \quad Recall = \frac{TP}{TP+FN}$$

where TP, TN, FP, and FN are true positives, true negatives, false positives, and false negatives, respectively. Balanced accuracy is a metric that indicates the percentage of correctly classified labels. Precision indicates how many miRNA:mRNA pairs classified as instances of positive interaction are actually positive. Recall is a metric that shows the percentage of positive pairs in the dataset that were classified as positive.

Implementation and availability

The data preparation and preprocessing were implemented in Python 3.8.10 using the Pandas package 1.3.0 . For the reproduced methods, we used the PyTorch machine learning framework 1.9.0 . To implement graph neural networks, we used the PyTorch Geometric extension 1.7.2 . Word2vec embeddings were prepared using utilities provided by the Gensim package 4.0.1 . Training was implemented using PyTorch Lightning 1.5.10 . The datasets, trained models and results data, together with the code and reproduction steps, are available in the project repository .
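For completeness, these metrics can be computed directly from the confusion-matrix counts, as in the short sketch below; the counts used in the example call are arbitrary.

```python
def classification_metrics(tp, tn, fp, fn):
    """Balanced accuracy, precision and recall from confusion-matrix counts."""
    recall = tp / (tp + fn)
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    bacc = (recall + specificity) / 2
    return {"BACC": bacc, "Precision": precision, "Recall": recall}

print(classification_metrics(tp=80, tn=70, fp=30, fn=20))
# {'BACC': 0.75, 'Precision': 0.727..., 'Recall': 0.8}
```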
We utilized the meticulously curated dataset collection from the miTAR study in our experiments. Gu et al. employed experimentally validated pairs from the miRAW and DeepMirTar studies. Data for the miRAW study was originally extracted from Diana TarBase and MirTarBase . Additionally, target site sequences were cross-referenced with PAR-CLIP , CLASH , and TargetScanHuman 7.1 . Data used in DeepMirTar study was collected from the mirMark and CLASH datasets. To apply the most recent knowledge, Gu et al. filtered out the samples containing miRNAs not present in the 22nd release of the miRBase . From the miTAR dataset, we derived four distinct datasets: two sets designated for training, validation, and testing (referred to as miRAW and DeepMirTar), and two additional independent test sets (referred to as miRAWIn and DeepMirTarIn). Furthermore, the miTAR authors compiled a consolidated dataset named MirTarRAW, composed of 33% of data from the miRAW set and 90% of data from the DeepMirTar set. This resulted in a unified dataset containing an equal number of samples from the miRAW and DeepMirTar sets. Each sample within the dataset comprises a pair of miRNA and mRNA sequences, wherein nucleotides are labeled using nucleic acid notation (characters from the set {A, C, G, T, U}), along with a corresponding binary label. A label of 1 indicates a positive interaction, i.e., the miRNA targets the respective mRNA, while a label of 0 signifies a negative interaction. The comprehensive statistics of each dataset used in our study is presented in Table . It is noteworthy that we decided to evaluate the methods only using miRAW, DeepMirTar, and MirTarRAW datasets owing to a low number of samples in the independent datasets.
Reproduced methods To classify duplex sequences using the reproduced target prediction methods, we first encoded them using one-hot encoding. This resulted in each nucleotide being represented numerically with a sparse, one-hot vector. Since the mRNA sequences contain Thymine and miRNA sequences contain Uracil, we unified the representation by using the same sparse vector to represent both nucleotides. To ensure a consistent input size, a padding nucleotide ’N’ was added, which has its own sparse representation. To account for this additional padding, one-hot vectors were set to a dimensionality of 5. To evaluate the reproduced methods in our experiments, we padded the miRNA sequences with this nucleotide to the length of the longest miRNA in the respective dataset, and repeated the same process for the mRNA sequences. The proposed method To prepare encoded inputs for the proposed method, we applied a word2vec-based encoding technique. Word2vec emerged in the natural language processing domain in 2013 as a way to create a vector representation of words. In one of its variants, the Continuous Bag of Words (CBOW) neural network model is trained to predict a selected word based on the context, which is a set of surrounding words in the given sentence. The model can be trained on a set of sentences, each consisting of words from a certain corpus. After training, the hidden layer weights of the model can be considered a lookup table, where each row of the weight matrix at the respective position of the word contains its latent vector representation, learned by the model. The power of this method comes from the fact that words with similar meanings are usually characteristic of similar contexts. On this basis, word2vec can capture functional relationships in language and place synonyms close to each other in the latent space, whereas words with a likely occurrence in different contexts are placed far apart. This trait can also be used in the analysis of biological sequences, as demonstrated by Asgari and Mofrad . To use this method with duplex sequences, we first extracted all miRNA and mRNA sequences from the training dataset into distinct sets. Next, we divided each sequence into 3 nucleotide words in both groups. This way, sequences could be regarded as sentences consisting of words. If the sequences are not divisible by 3, we left 1 and 2 nucleotide remainders as words. Finally, we trained a distinct CBOW model on each set of sentences, one on miRNA sequences, and one on mRNA sequences. As a result, for each word in the dataset, we obtained a dense vector representation by inferring it through a corresponding model. We set the dimensionality of the resulting vectors to 16. With the trained models in place, the input to our method could be prepared by first splitting the sequence into words and then inferring words through a corresponding CBOW model. Finally, we stacked the resulting dense vectors. With this encoding procedure, one dataset sample consisted of two stacks of dense vectors, one for each sequence in the duplex pair. Note that we conducted a separate experiment for each dataset (DeepMirTar, miRAW, and MirTarRAW), and therefore trained a total of six word2vec models. Moreover, we trained these models only on the respective training datasets to prevent any knowledge from leaking from the test samples.
To classify duplex sequences using the reproduced target prediction methods, we first encoded them using one-hot encoding. This resulted in each nucleotide being represented numerically with a sparse, one-hot vector. Since the mRNA sequences contain Thymine and miRNA sequences contain Uracil, we unified the representation by using the same sparse vector to represent both nucleotides. To ensure a consistent input size, a padding nucleotide ’N’ was added, which has its own sparse representation. To account for this additional padding, one-hot vectors were set to a dimensionality of 5. To evaluate the reproduced methods in our experiments, we padded the miRNA sequences with this nucleotide to the length of the longest miRNA in the respective dataset, and repeated the same process for the mRNA sequences.
To prepare encoded inputs for the proposed method, we applied a word2vec-based encoding technique. Word2vec emerged in the natural language processing domain in 2013 as a way to create a vector representation of words. In one of its variants, the Continuous Bag of Words (CBOW) neural network model is trained to predict a selected word based on the context, which is a set of surrounding words in the given sentence. The model can be trained on a set of sentences, each consisting of words from a certain corpus. After training, the hidden layer weights of the model can be considered a lookup table, where each row of the weight matrix at the respective position of the word contains its latent vector representation, learned by the model. The power of this method comes from the fact that words with similar meanings are usually characteristic of similar contexts. On this basis, word2vec can capture functional relationships in language and place synonyms close to each other in the latent space, whereas words with a likely occurrence in different contexts are placed far apart. This trait can also be used in the analysis of biological sequences, as demonstrated by Asgari and Mofrad . To use this method with duplex sequences, we first extracted all miRNA and mRNA sequences from the training dataset into distinct sets. Next, we divided each sequence into 3 nucleotide words in both groups. This way, sequences could be regarded as sentences consisting of words. If the sequences are not divisible by 3, we left 1 and 2 nucleotide remainders as words. Finally, we trained a distinct CBOW model on each set of sentences, one on miRNA sequences, and one on mRNA sequences. As a result, for each word in the dataset, we obtained a dense vector representation by inferring it through a corresponding model. We set the dimensionality of the resulting vectors to 16. With the trained models in place, the input to our method could be prepared by first splitting the sequence into words and then inferring words through a corresponding CBOW model. Finally, we stacked the resulting dense vectors. With this encoding procedure, one dataset sample consisted of two stacks of dense vectors, one for each sequence in the duplex pair. Note that we conducted a separate experiment for each dataset (DeepMirTar, miRAW, and MirTarRAW), and therefore trained a total of six word2vec models. Moreover, we trained these models only on the respective training datasets to prevent any knowledge from leaking from the test samples.
Classifier overview The high-level overview of the GraphTar method is presented in Fig. . Firstly, the word2vec encoded sequences of the duplex are used to create an input graph in the input graph preparation procedure. The resulting graph representation is then passed through a set of GNN layers, which results in latent graph node space embeddings. Next, the node embeddings are aggregated using a prediction head operator, to form a graph embedding. Finally, the graph embedding is passed through a set of fully connected layers, resulting in a prediction vector. Input graph preparation To create a graph representation of the miRNA–mRNA duplex under consideration, we treat the word2vec encoded words as nodes of the graph. To construct an undirected graph, we align the sequences by the starting word and connect the closest words within and between the sequences with a bidirectional edge. This process is illustrated in Fig. . GNN layers The proposed classifier is composed of two parts. The first part consists of a set of GNN layers, which can process the miRNA–mRNA duplex graph created in the input graph preparation procedure. During training, the layers learn to extract accurate node embeddings, which can be used for various machine learning tasks. Many GNN layer variants have been proposed in the literature. For this study, we selected three established approaches for node embedding extraction: Graph Convolutional Networks (GCNs), the GraphSAGE inductive framework, and Graph Attention Networks (GATs) for evaluation. GCNs were originally introduced by Kipf and Welling for semi-supervised node classification. The approach is motivated by an approximation of spectral graph convolutions . The layer-wise propagation rule introduced in their work can operate directly on graphs and was able to solve semi-supervised node classification problems on citation networks with state-of-the-art results. For an input graph with V nodes with features X , the layer-wise propagation rule proposed in their work is: [12pt]{minimal}
$$ H^{(l+1)}= ({}^{-}{}{}^{-}H^{(l)}W^{(l)}) $$ H ( l + 1 ) = σ ( D ~ - 1 2 A ~ D ~ - 1 2 H ( l ) W ( l ) ) where [12pt]{minimal}
$${}$$ D ~ is a graph degree matrix, [12pt]{minimal}
$${}$$ A ~ is the adjacency matrix, [12pt]{minimal}
$$H^{(l)} {}^{V {D}}$$ H ( l ) ∈ R V × D is a matrix of node features at layer l ; [12pt]{minimal}
$$H^{(0)} = X$$ H ( 0 ) = X . [12pt]{minimal}
$$W^{(l)}$$ W ( l ) is a trainable weight matrix. This method can be generalized to unseen graphs, as all trainable weights at each layer can be shared between the nodes. The weights can be then used during inference on unseen nodes. Hamilton, Ying, and Leskovec extended the idea behind GCNs by proposing a general, inductive framework for node embedding—GraphSAGE . In GraphSAGE, the forward propagation algorithm is also layer-wise and makes use of two operators: aggregation ( AGG ) and concatenation ( CONCAT ). For graph with V nodes, the steps leading to the computation of the node v feature h at layer l (denoted as [12pt]{minimal}
Hamilton, Ying, and Leskovec extended the idea behind GCNs by proposing a general, inductive framework for node embedding, GraphSAGE . In GraphSAGE, the forward propagation algorithm is also layer-wise and makes use of two operators: aggregation ($AGG$) and concatenation ($CONCAT$). For a graph with $V$ nodes, the steps leading to the computation of the feature $h$ of node $v$ at layer $l$ (denoted as $h_v^l$) are as follows:
$$h_{\mathcal{N}(v)}^{l} = AGG_l\left(\{h_u^{l-1}, \forall u \in \mathcal{N}(v)\}\right)$$
$$h_v^{l} = \sigma\left(W^l \cdot CONCAT\left(h_v^{l-1}, h_{\mathcal{N}(v)}^{l}\right)\right)$$
$$h_v^{l} = h_v^{l} / \lVert h_v^{l} \rVert_2$$
Note that, again, if we denote the original feature vector of node $v$ as $x_v$, then $h_v^0 = x_v$. In the first step, the node neighborhood $\mathcal{N}(v)$ is passed to the $AGG$ operator, which yields a representation of the neighborhood, $h_{\mathcal{N}(v)}^{l}$. The aggregation operator can simply be the element-wise mean of the neighbor feature vectors at the previous layer, but the framework is flexible and more complicated functions can be used. In the original article, the authors also used an LSTM aggregator and a pooling aggregator, consisting of a fully connected neural network followed by an element-wise max-pooling operation. In the second step of the forward pass, the latest representation of node $v$, $h_v^{l-1}$, is concatenated with its neighborhood representation. The result is then multiplied by a weight matrix at the respective layer, $W^l$, shared between all nodes. This gives the framework its inductive character: it is able to generalize to unseen nodes. Finally, a non-linearity is applied and the vector is normalized to unit length. It is worth noting that aggregation of the node neighborhood may be expensive for large graphs; to address this, the authors propose to sample a fixed-size neighborhood instead of taking all neighbors into account.
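The three GraphSAGE equations can be condensed into a few lines for a single node with a mean aggregator. This is only an illustrative sketch: the weight shapes and the ReLU non-linearity are assumptions, and in practice an off-the-shelf layer such as PyTorch Geometric's SAGEConv would be used.

```python
import torch

def graphsage_step(h_prev, neighbors, W):
    """One GraphSAGE update for a single node with a mean aggregator.
    h_prev: (D,) node feature at layer l-1
    neighbors: (num_neighbors, D) neighbor features at layer l-1
    W: (D_out, 2 * D) shared weight matrix of the layer."""
    h_neigh = neighbors.mean(dim=0)                    # AGG: element-wise mean
    h_cat = torch.cat([h_prev, h_neigh])               # CONCAT(h_v^{l-1}, h_N(v)^l)
    h_new = torch.relu(W @ h_cat)                      # sigma(W^l . CONCAT(...))
    return h_new / h_new.norm(p=2).clamp(min=1e-12)    # normalize to unit length

h_prev = torch.randn(16)
neighbors = torch.randn(3, 16)
W = torch.randn(32, 32)   # maps the 32-dim concatenation to a 32-dim embedding
h_new = graphsage_step(h_prev, neighbors, W)
```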
Graph Attention Networks were proposed by Veličković et al. as an alternative to spectral (e.g., GCN) and non-spectral (e.g., GraphSAGE) node embedding approaches. GATs use a self-attention mechanism which allows the network to choose important nodes in a node's neighborhood. For this purpose, at each layer the attention coefficients are calculated first. Using the neighborhood notation from the GraphSAGE description, for node $v$ the attention coefficient $e_{vu}$ is calculated as follows:
$$e_{vu} = a\left(W_k h_u^{k-1}, W_k h_v^{k-1}\right), \quad \forall u \in \mathcal{N}(v)$$
where $W_k$ is a weight matrix applied to every node and shared within a layer $k$, and $h_v^{k-1}$ is the feature vector of node $v$ at layer $k-1$. The attention mechanism $a$ is a fully connected neural network with a LeakyReLU non-linearity. After computing the attention coefficients, they are normalized using the softmax function, so that the resulting neighbor importances sum to 1:
$$\alpha_{vu} = \frac{\exp(e_{vu})}{\sum_{n \in \mathcal{N}(v)} \exp(e_{vn})}$$
Finally, the normalized attention coefficients are used to calculate the node feature vector $h_v^k$, after applying a non-linearity:
$$h_v^k = \sigma\left(\sum_{u \in \mathcal{N}(v)} \alpha_{vu} W_k h_u^{k-1}\right)$$
In our experiments, we evaluate the use of the aforementioned node embedding approaches to compare their performance. Regardless of the approach used, for a graph with $V$ nodes the GNN layers yield a $D \times V$ embedding matrix, where $D$ is the dimensionality of the node feature vectors. $D$ is one of the hyperparameters that has to be tuned to obtain the best performance.

Prediction head
Node features extracted by the GNN layers contain important information about the nodes and can be used for various tasks, including node, edge, and graph classification. The prediction head operator transforms these embeddings into a representation suitable for the chosen task. In this study, we aim to solve a graph classification problem, and thus we need to acquire a graph representation, based on the node embeddings, that can be classified by a set of fully connected layers. To achieve this goal, we apply a pooling operator that aggregates the node information into a graph feature vector with $D$ features, equal to the dimensionality of the node feature vectors. We consider three classic pooling operators, established in the literature : global add pooling (ADD), global mean pooling (MEAN), and global max pooling (MAX), and choose the best one for each embedding approach during the hyperparameter tuning process. These operators apply element-wise sum, averaging, or maximum operations to the node feature vectors, yielding an aggregated graph representation. The operators preserve different aspects of the graph. For instance, global add pooling provides distinct graph embeddings for graphs of varying sizes, as a greater number of nodes will generally yield greater element-wise sums. On the other hand, global mean pooling can provide similar graph embeddings even if the considered graphs vary greatly in size: when averaging, similar element-wise values can be obtained for both large and small graphs.

Fully connected layers
The graph embedding vector is processed by a set of fully connected layers with ReLU activations. As a regularization mechanism, we use dropout after every fully connected layer except the last. The final layer returns two values and is normalized using the softmax function. This yields two probability values indicating whether the considered graph is an instance of a positive or a negative interaction.
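Putting the pieces together, a GraphTar-style classifier can be sketched in PyTorch Geometric as a stack of GNN layers, a global pooling prediction head, and fully connected layers with dropout and a softmax output. The layer counts and sizes below are illustrative placeholders rather than the tuned values reported in the tables, and GATConv with global add pooling is shown only as one possible configuration.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GATConv, global_add_pool

class DuplexClassifier(torch.nn.Module):
    """Sketch of a GraphTar-style classifier: GNN layers -> pooling -> FC head."""
    def __init__(self, in_dim=16, gnn_dim=64, fc_dim=128, dropout=0.4):
        super().__init__()
        self.gnn1 = GATConv(in_dim, gnn_dim)
        self.gnn2 = GATConv(gnn_dim, gnn_dim)
        self.fc1 = torch.nn.Linear(gnn_dim, fc_dim)
        self.fc2 = torch.nn.Linear(fc_dim, 2)
        self.dropout = torch.nn.Dropout(dropout)

    def forward(self, x, edge_index, batch):
        x = F.relu(self.gnn1(x, edge_index))        # node embeddings
        x = F.relu(self.gnn2(x, edge_index))
        g = global_add_pool(x, batch)               # prediction head: graph embedding
        g = self.dropout(F.relu(self.fc1(g)))       # FC layer with ReLU + dropout
        return F.softmax(self.fc2(g), dim=-1)       # positive vs. negative interaction
```

During training, the softmax output would typically be paired with a negative log-likelihood loss (or replaced with raw logits and cross-entropy), which is a standard choice rather than something specified in the text.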
To evaluate the performance of the reproduced methods and the GNN classifiers proposed in this work, we report the following metrics: balanced accuracy (BACC), precision, and recall. These metrics are calculated as follows:
$$BACC = \frac{1}{2}\left(\frac{TP}{TP+FN} + \frac{TN}{TN+FP}\right), \qquad Precision = \frac{TP}{TP+FP}, \qquad Recall = \frac{TP}{TP+FN}$$
where TP, TN, FP, and FN are true positives, true negatives, false positives, and false negatives, respectively. Balanced accuracy indicates the percentage of correctly classified labels. Precision indicates how many miRNA:mRNA pairs classified as instances of positive interaction are actually positive. Recall shows the percentage of positive pairs in the dataset that were classified as positive.
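For reference, these metrics can be computed directly from the confusion-matrix counts; the snippet below is a plain NumPy helper rather than code from the project repository.

```python
import numpy as np

def classification_metrics(y_true, y_pred):
    """Compute balanced accuracy, precision, and recall for binary labels (1 = positive)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    bacc = 0.5 * (tp / (tp + fn) + tn / (tn + fp))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return bacc, precision, recall

print(classification_metrics([1, 0, 1, 1, 0], [1, 0, 0, 1, 1]))
```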
The data preparation and preprocessing were implemented in Python 3.8.10 using the Pandas package 1.3.0 . For the reproduced methods, we used the PyTorch machine learning framework 1.9.0 . To implement graph neural networks, we used the PyTorch Geometric extension 1.7.2 . Word2vec embeddings were prepared using utilities provided by the Gensim package 4.0.1 . Training was implemented using PyTorch Lightning 1.5.10 . The datasets, trained models and results data, together with the code and reproduction steps, are available in the project repository .
Experimental setup
To compare the proposed approach with the current state of the art, we reimplemented miRAW , DeepMirTar , and miTAR according to the descriptions in the respective articles and evaluated them, together with the proposed GraphTar approach, on the data described in Table . Note that for DeepMirTar we used only a part of the proposed architecture: we employed only raw sequences as input to the model, whereas the original study additionally used expert-based features. Within the GraphTar framework, we trained a separate model for each of the node embedding methods considered ($E_{GNN} \in [GCN, GAT, GraphSAGE]$), with the resulting models denoted as GraphTarGCN, GraphTarSAGE, and GraphTarGAT, respectively. For each compared method, we trained three separate models on the miRAW, DeepMirTar, and MirTarRAW datasets, split using a training:validation:test ratio of 0.7:0.15:0.15. We repeated this step for 30 data splits, which resulted in 90 models overall for each method. We trained all models for 1000 epochs using the Adam optimizer , starting with a learning rate of $lr = 0.001$ and reducing it on plateau. If the model did not improve for 100 epochs, we performed early stopping.
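A hedged sketch of this training configuration using PyTorch Lightning (the training framework named in the implementation details) is shown below. The monitored quantity, the loss function, and the wrapper class are assumptions; DuplexClassifier refers to the classifier sketched earlier.

```python
import torch
import pytorch_lightning as pl
from pytorch_lightning.callbacks import EarlyStopping

class LitDuplexClassifier(pl.LightningModule):
    """Lightning wrapper sketch around the DuplexClassifier sketched above."""
    def __init__(self):
        super().__init__()
        self.net = DuplexClassifier()

    def training_step(self, batch, batch_idx):
        probs = self.net(batch.x, batch.edge_index, batch.batch)
        loss = torch.nn.functional.nll_loss(torch.log(probs), batch.y)
        self.log("train_loss", loss)
        return loss

    def validation_step(self, batch, batch_idx):
        probs = self.net(batch.x, batch.edge_index, batch.batch)
        self.log("val_loss", torch.nn.functional.nll_loss(torch.log(probs), batch.y))

    def configure_optimizers(self):
        opt = torch.optim.Adam(self.parameters(), lr=0.001)
        sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt)
        return {"optimizer": opt, "lr_scheduler": {"scheduler": sched, "monitor": "val_loss"}}

trainer = pl.Trainer(max_epochs=1000,
                     callbacks=[EarlyStopping(monitor="val_loss", patience=100)])
# trainer.fit(LitDuplexClassifier(), train_loader, val_loader)  # DataLoaders assumed
```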
Hyperparameter tuning
To obtain the best performance, for all GNN layer types we performed a hyperparameter search prior to the experiments, on one data split of the MirTarRAW dataset, using the grid search methodology. The selection was based on the balanced accuracy score on the validation set. We searched for the best GNN layer embedding size $D_{GNN} \in [16, 32, 64, 128, 256, 512]$, as well as the optimal number of graph layers $L_{GNN} \in [1, 2, \ldots, 10]$. Similarly, we attempted to find the best prediction head operator $P_h \in [ADD, MEAN, MAX]$ for each of the node embedding methods. Using the same parameter ranges, we also tuned the dimensionality and number of fully connected layers ($D_{FC}$ and $L_{FC}$, respectively), as well as the dropout rate $R_D \in \{0.2, 0.3, 0.4, 0.5, 0.6\}$. The resulting parameters used in the GraphTar experiments are provided in Table . For all models, including the reproduced methods, we also found the optimal batch size $B_s \in [16, 32, 64, 128, 256, 512]$ for each training dataset (Table ).
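The grid search itself can be sketched as a loop over the Cartesian product of (a subset of) the ranges above, keeping the configuration with the best validation balanced accuracy. Here, train_and_validate is a hypothetical helper standing in for one training run on the single MirTarRAW split; it is not part of the published code, and the remaining grids (FC sizes, batch size) are omitted for brevity.

```python
from itertools import product

# Hyperparameter grids as described in the text (subset shown).
d_gnn_grid = [16, 32, 64, 128, 256, 512]
l_gnn_grid = range(1, 11)
heads = ["ADD", "MEAN", "MAX"]
dropout_grid = [0.2, 0.3, 0.4, 0.5, 0.6]

best_score, best_cfg = -1.0, None
for d_gnn, l_gnn, head, dropout in product(d_gnn_grid, l_gnn_grid, heads, dropout_grid):
    cfg = {"d_gnn": d_gnn, "l_gnn": l_gnn, "pred_head": head, "dropout": dropout}
    # Hypothetical helper: trains one model on the single split and returns
    # the validation balanced accuracy.
    score = train_and_validate(cfg)
    if score > best_score:
        best_score, best_cfg = score, cfg
```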
Performance versus the state of the art
In this section, we present our findings on the performance comparison between the GraphTar models and state-of-the-art methods. Our results, which are summarized in Tables , , and , reveal that there is no method that performs better than the others on all datasets. We observed that the GraphTar models, along with miRAW and miTAR, consistently outperformed the adaptation of DeepMirTar in terms of balanced accuracy score, with the differences ranging from 0.04 to 0.083, depending on the dataset. On the DeepMirTar dataset, the miTAR model achieved the best performance with a balanced accuracy score of 0.927, followed by GraphTarGAT (0.922) and GraphTarSAGE (0.915). GraphTarGCN and miRAW achieved considerably lower scores of 0.904 and 0.902, respectively, while DeepMirTar performed the worst with an accuracy score of 0.815. On the miRAW dataset, GraphTarGAT emerged as the top-performing method with a balanced accuracy score of 0.948, followed closely by GraphTarSAGE, miTAR, and miRAW with scores of 0.94, 0.939, and 0.938, respectively. GraphTarGCN and DeepMirTar exhibited inferior performance with balanced accuracy scores of 0.915 and 0.875. Finally, on the MirTarRaw dataset, miRAW achieved the highest balanced accuracy score of 0.928, while GraphTarGAT came in second with a score of 0.921. GraphTarGCN, GraphTarSAGE, and miTAR obtained balanced accuracy scores of 0.915, 0.914, and 0.91, respectively. In contrast, DeepMirTar performed worst with a balanced accuracy score of 0.827. An illustration of the results is shown in Fig. .

Ablation experiments
To investigate which parts of the GraphTar architecture have the greatest impact on predictive performance, we conducted ablation experiments. In these experiments, we assessed the effects of four aspects of the graph encoder architecture: the number of graph embedding layers, the graph layer embedding size, the graph embedding method, and the prediction head. The baseline for our experiments was GraphTarGAT with the following parameters: $E_{GNN} = GAT$, $L_{GNN} = 5$, $D_{GNN} = 256$, $L_{FC} = 2$, $D_{FC} = 128$, $P_h = ADD$, $R_D = 0.4$. We carried out the experiments using the MirTarRaw dataset, given its substantial sample size. For each experiment, all model configurations were trained and evaluated across 30 data splits, following the methodology employed in the preceding experiments. The parameters we considered included: $D_{GNN} \in [16, 32, 64, 128, 256, 512]$, $L_{GNN} \in [1, 2, \ldots, 10]$, $P_h \in [ADD, MEAN, MAX]$, and $E_{GNN} \in [GCN, GAT, GraphSAGE]$. As a result of the ablation study, we observed that the optimal number of GNN layers was equal to 2 (balanced accuracy of 0.927); using more than 2 GAT layers resulted in decreased metric scores (Fig. ), with the lowest score, 0.916, obtained for $L_{GNN} = 9$. As for the graph layer embedding size, the results show that the optimal value was in the middle of our search space, equal to 128 (balanced accuracy of 0.92), as seen in Fig. . In this experiment, the lowest value recorded, 0.895, was obtained with $D_{GNN} = 16$. Out of the three GNN embedding methods considered (Fig. ), the best results were obtained with GAT (0.92) and the worst with GCN (0.916). Finally, the best-performing prediction head was global add pooling (0.92 accuracy score), whereas the worst-performing, global mean pooling, obtained a 0.916 accuracy score (Fig. ).
Performance versus the state of the art
In this study, we compared the performance of several machine learning models, including the proposed GraphTar method, on three datasets: DeepMirTar, miRAW, and MirTarRaw. Our results indicate that different methods obtained the best performance on different datasets, with the miRAW model yielding the best results on the MirTarRaw dataset, while miTAR was the best on the DeepMirTar dataset. Interestingly, our findings did not support the results of the miTAR article , which indicated that this method was clearly superior to miRAW. We observed that all methods exhibited very similar performance, except for the DeepMirTar method, which clearly demonstrated less proficiency in target prediction. This could be a result of an inferior architecture being employed. However, it is important to note that our study focused on methods operating with raw sequence inputs. Consequently, we modified the architecture to exclude expert-based features from the input, as opposed to the original study. This modification likely had an impact on the performance of the DeepMirTar models. We discovered that the proposed GraphTar method matched, at the very least, the performance of miRAW and miTAR. It achieved the best performance on the miRAW dataset, the second-best on the MirTarRaw dataset, and performed on par with miTAR on the DeepMirTar dataset. This implies that the novel graph representation can effectively describe the spatial structure of a miRNA–mRNA duplex. It also indicates that graph neural networks can be as effective in target prediction as architectures built on autoencoders and recurrent layers. However, we could not establish a clear, dataset-independent advantage of the GNN architecture over state-of-the-art methods. Further development, including the exploration of novel node embedding methods, meticulous parameter tuning, and experimentation with various GNN architectures, holds significant promise for the future. Additionally, the results revealed that among the considered node embedding methods, GraphTar based on Graph Attention Networks clearly outperformed the others. The attention mechanism proved to be the most effective in learning node embeddings. Following closely was the GraphTarSAGE method, with GraphTarGCN only posing a challenge to GraphTarSAGE on the MirTarRaw dataset. Interestingly, we discovered that graph neural networks could only achieve performance comparable to miTAR and miRAW when utilizing word2vec embeddings in the graph representation. Our initial experiments demonstrated that GNN-based models, when their input was encoded with one-hot encoding, failed to surpass a balanced accuracy score of approximately 0.85 across all datasets. This indicates that word2vec-based encoding dramatically improves the performance of GNNs in this context compared to other encoding techniques, such as one-hot encoding. It also suggests that while GNNs excel at generating accurate graph embeddings for duplex structures, they struggle to precisely capture the features of sequence elements at the nucleotide level. Further investigation should center on understanding the specific enhancements introduced by the word2vec embedding method and the limitations of GNNs in embedding biological sequences. Moreover, there is a need to thoroughly investigate and interpret the representations acquired within GNN-based models to unveil the inherent characteristics of the miRNA–mRNA duplex. These insights could provide a valuable understanding of the miRNA binding mechanism and a deeper understanding of the biological processes underpinning target prediction. Clearly, explainability is important for researchers, which is why easy-to-interpret algorithms are popular in computational biology. An example could be Ordinary Differential Equations-based (ODE-based) modeling methods employed in systems biology, such as in studies on protein signaling networks . From our perspective, ensuring the interpretability of GNN models represents a crucial avenue of research for any interaction prediction study employing this category of deep learning architectures. Although in recent years GNN-based prediction methods have become more popular in other fields of computational biology, such as metabolite-disease associations (Sun et al., ), long non-coding RNA (Wang et al., ), and protein-protein interactions (Shen et al., ), little research has focused on actually investigating the intrinsic mechanisms behind their predictive performance. One reason for this could be that the methodology of such investigations is not widely understood. To address this, providing a set of good practices and methods for GNN model interpretability in the context of interaction prediction could give the momentum necessary to comprehend and utilize the knowledge conveyed within the prediction models. Once understood, this information could further enable the identification of novel drug targets, genetic markers, and disease associations. Another angle that supports the importance of interpretability is the fact that it is commonly referenced as a challenge that prevents the use of deep learning methods in healthcare (Miotto et al., ). In the current state of interaction prediction research, where the models cannot be used in practice (e.g., applied to real patients in therapies), it is much more interesting to know why various interactions occur than if they occur. The classification task itself is merely a way to guide the models to learn the right task to solve, but for the field it is not as important as deepening the understanding of the interaction intrinsics. Once we deepen our understanding of these intrinsics, we can design better data preprocessing methods and models, and crucially validate them using this knowledge. At that point, when we are able to understand the models, we can convince medical professionals to use them in practice, and then the if question will really become relevant.

Ablation experiments
One methodology that could be of great help in revealing crucial components of the GNN architecture involves conducting ablation experiments, similar to the ones documented in the results section. Through these experiments, we were able to observe how different aspects of the graph encoder (such as the number of layers, embedding size, prediction head, and layer type) impact predictive performance. We deduced that while all these aspects influence the results, the most significant ones were the embedding size and the number of GNN layers. Interestingly, ablation experiments focused on these aspects revealed which parameters found by our hyperparameter tuning procedure could potentially still be improved and which seem to have the best values. They indicate that we might have achieved superior results by using 2 GAT layers instead of 5 and a graph embedding size of 128 instead of 256 in the GraphTarGAT architecture evaluated in the "Performance versus the state of the art" experiments.
Improving this process would require performing the hyperparameter tuning procedure on multiple data splits instead of one; however, this would significantly increase the amount of computation required in this step. It seems that a 128-dimensional embedding adequately encapsulates the essential information about the nodes in the input graph for the miRNA–mRNA duplex. The optimal choice of 2 layers suggests that the graph neural network needs to propagate information from neighbors at most two edges away from a node to obtain the most informative embedding. This suggests that nucleotide interactions primarily occur in close proximity and are not widely distributed across the duplex. For the selection of prediction heads, we found that the best performance in the ablation experiment was achieved with the ADD operator, followed by MAX and MEAN. Meanwhile, the average balanced accuracy score gap between the best and worst performing operators is not substantial, equaling 0.05. It remains unclear why the ADD operator produced the best results and what the underlying mechanisms for this performance are. As previously mentioned, an investigation into model interpretability could potentially unveil and elucidate the impact of various prediction heads on the model's predictions. Given that pooling operators are a crucial component of GNN classifiers, this constitutes an important direction for future research. In summary of these experiments, when fine-tuning GNN architectures, we recommend commencing the process by adjusting the number of graph embedding layers, followed by tuning the embedding size and prediction head. Only subsequently should one proceed to search for the most optimal graph embedding layer type.

Dataset quality
While the miRAW and DeepMirTar datasets have provided a valuable benchmark for our study, it is crucial to expand the current datasets to propel target prediction methods towards real-world applications. The biggest limitation of DL methods is related to data quality; as a result, DL-based interaction prediction algorithms face constraints in numerous domains of computational biology. Based on our expertise in this domain, we can delineate the data quality limitations as follows:
Data availability: large interaction datasets are not common and widely available.
Data balance: the ratio of positive and negative samples is highly imbalanced (e.g., in miRNA–mRNA interaction prediction, there is a complete lack of experimentally validated negative samples, whereas in metabolite-disease association prediction, the ratio of positive to negative samples can be 1–100, as outlined by Sun et al. in ). Data imbalance makes models biased and difficult to train and evaluate.
Benchmarking: there is no established, official benchmark to evaluate miRNA–mRNA interaction prediction methods, which makes comparing them difficult and unreliable.
Data heterogeneity: interaction studies cover a small part of the tissue and disease search space. Conducting a wider search and expanding knowledge about the impact of interaction phenomena can uncover their influence on diseases that so far have been out of the spotlight in this research field.
Considering the aforementioned deficiencies, forthcoming studies should prioritize the acquisition of more diverse, well-balanced, and extensive datasets. This effort will establish a robust foundation for the trustworthy evaluation of data-driven prediction algorithms like GraphTar. An alternative could involve turning to more data-efficient approaches, such as the aforementioned ODE-based modeling methods . Employing methods that can work with a modest dataset could prove essential in advancing the state of the art, regardless of the challenges in dataset collection. Nevertheless, the presence of standardized evaluation datasets and methodologies holds paramount importance for employing computational techniques as research instruments and for their potential applications in healthcare contexts.
In this study, we introduced an innovative approach to miRNA target prediction named GraphTar. We framed target prediction as a graph classification problem and put forth a novel graph representation for the miRNA–mRNA duplex. For encoding the nucleotide triplets within each sequence, we harnessed the word2vec method, which had not been previously employed in target prediction. The resulting graph, composed of encoded triplets, was classified using a graph neural network. Through a comprehensive comparison with replicated state-of-the-art methods, we illustrated that GraphTar can match the performance of state-of-the-art classifiers and even surpass them on one of the datasets. To gain further insight, we conducted ablation experiments assessing the influence of the number and type of graph layers, as well as the embedding size and global pooling method, on predictive performance. Building upon our experience, we discussed the most important future study directions, such as exploring the underlying mechanisms of GNN-based interaction prediction methods and developing more accurate and efficient GNN architectures. As outlined in the discussion section, expanding and standardizing the available datasets for target prediction will also be critical to advance the field towards real-world applications.
The Neurophysiological Representation of Imagined Somatosensory Percepts in Human Cortex | 94bd1cbc-aa57-4801-b6f0-1dd762498243 | 8018772 | Physiology[mh] | In recent studies, intracortical microstimulation (ICMS) in primary somatosensory cortex (S1) has been successfully used to elicit somatosensory sensations in quadriplegic humans below the level of spinal cord lesion . Many parameters of the electrical stimulus, such as amplitude, frequency, duration, and electrode location, have been found to manipulate the qualitative experience of elicited sensory responses in both non-human primates and humans . It is therefore important to develop our understanding of the correspondence between stimulation parameters and the sensations they elicit if we are to further understand the mode of action of ICMS and elicit specific sensations more reliably via ICMS. To begin, we seek to uncover the neurophysiology underlying those sensations previously elicited by ICMS. In previous work , we found the top five most elicited somatic sensations with ICMS in S1 of a human participant. These were naturalistic sensations which the subject had not experienced in deafferented locations since being injured. We seek to examine for the first time the intracortical electrophysiological behavior of human sensorimotor circuits while experiencing these same sensations. Since it is not possible to use normal touch to elicit a sensation below the level of paralysis in a quadriplegic individual, we performed our experiment using “somatosensory imagery”, the vivid recollection of a somatosensory experience, to evoke activity in these circuits specific to the same sensations experienced during electrical stimulation. We chose to use sensations that were previously elicited by ICMS, rather than any sensation the subject was able to imagine, because these sensations were elicited with known stimulation parameters in the same cortical area we record from during somatosensory imagery. Somatosensory imagery has previously been shown in functional magnetic resonance imaging (fMRI) studies to activate the somatosensory system . Both primary and secondary somatosensory areas are activated by tactile imagery in areas that respond to actual touch. Imagined movements after amputation of the fingers have also been shown to produce neural activation in somatosensory cortex . We record intracortically from three areas of human cortex ( A ), S1, ventral premotor cortex (PMv), and the supramarginal gyrus (SMG). Each of these areas is involved in somatosensory processing. Neurons in S1 respond to cutaneous and proprioceptive stimuli and electrical stimulation in this area produces naturalistic somatosensory percepts . The SMG array, on the SMG near the anterior end of the intraparietal sulcus ( A ), is in a region of cortex often studied in the context of grasp for both human and non-human primate studies. There is not yet enough evidence in this literature and our study to make exact homological assignments between the two species. Similarly, this same region of cortex responds to somatosensory stimuli in both species and has reciprocal connections to other sensorimotor regions such as BA1, BA2, BA5, S2 , and premotor cortex . Broadly, posterior parietal cortex is a higher order area in sensorimotor and somatosensory processing . PMv neurons respond to tactile and proprioceptive somatosensory stimuli . 
Given the role of these areas in somatosensory processing, we expect to observe neurophysiological modulation as a result of somatosensory imagery. In this work, we investigated the neural correlates of imagined sensations and how this representation is distributed across different sensorimotor cortical areas. We used the sensations previously experienced by our participant during ICMS and sought to demonstrate a discriminable representation of the sensations in the brain. We examined neurophysiological responses to somatosensory imagery from intracortical human recordings across three brain areas, each implanted with recording microelectrode arrays (Utah Array, Blackrock Microsystems): S1, SMG, and PMv. We found that highly significant classification accuracy between sensations was attainable using both threshold crossing spiking activity and spectral power of various common frequency bands in the continuous brain signal. Our results demonstrate that unique sensory experiences can be classified from human neural signals during somatosensory imagery and explore how the encoding of different aspects of sensation is distributed across different brain areas. The correspondence between the neural signal during somatosensory imagery and the stimulation parameters that elicit the same sensations may inform the choice of stimulation parameters for eliciting novel and robust sensations via ICMS in future work.
Participant
We recruited and consented a male participant with C5-level complete spinal cord injury (34 years old, three years and six months postinjury, and one year and eight months postimplant at the time of the first experiment) to participate in a clinical trial of a brain-machine interface (BMI) system with intracortical recording and stimulation. All data were recorded through electrode arrays that were implanted in three locations of the left hemisphere ( A ): SMG, PMv, and S1. One 96-channel, platinum-tipped Neuroport microelectrode recording array (Blackrock Microsystems, Salt Lake City, UT) was implanted in each of SMG and PMv. Two 48-channel SIROF-tipped (sputtered iridium oxide film) microelectrode arrays were implanted in S1. Further information regarding specific surgical planning and implantation details is described in . All procedures were approved by the Institutional Review Boards (IRB) of the California Institute of Technology, University of Southern California, and Rancho Los Amigos National Rehabilitation Hospital.

Task
Based on the outcome of S1-only stimulation mapping, we identified the five most commonly elicited sensations with ICMS: "squeeze," "tap," "rightward movement," "vibration," and "blowing." These sensations represented 24.9%, 17.3%, 9.7%, 8.1%, and 6.6%, respectively, of 381 total ICMS-elicited sensations (for full details of ICMS mapping, see ). These sensations were experienced in the same body locations of the contralateral forearm and upper arm. In our somatosensory imagery experiment, each trial consisted of an intertrial interval (ITI), a cue, a delay, and an imagery phase. During the ITI, a black screen with a gray circle (1-cm diameter) in the middle was shown for 4 s, during which the participant was instructed to rest and fixate gaze on the circle, although gaze was not measured. In the cue phase, one of the sensations listed above was presented as a written word for 2 s; then, in the 2-s delay phase, only a black screen with the fixation circle was shown. In the final 5-s imagery phase of the task, the fixation circle changed to green and the participant began somatosensory imagery. The instruction for the imagery phase given at the beginning of each experiment was to "imagine the sensation as you experienced it during electrical stimulation as vividly as possible" ( B ). The participant confirmed to us that the sensations were all imagined in the same location at the forearm, thus controlling for the inadvertent classification of location rather than sensation. In each run of the task, each individual sensation was imagined 10 times (total 50 trials per run), pseudo-randomly shuffled. The full dataset consists of 400 trials with N = 80 repetitions of each imagined sensation.

Experiment design and data collection
Data were collected from each array site using a 128-channel Neural Signal Processor (Blackrock Microsystems). Broadband signals were recorded at 30,000 samples/s. Spectral power was computed for each phase of each trial using MATLAB's pspectrum function (MathWorks Inc., MA). Unsorted threshold crossings, extracted from the broadband signal using a threshold of −3.5 times the noise RMS of the continuous signal voltage, were used as spike activity. The first full data set (herein referred to as experiment 1) was collected across 10 d. The second full data set was collected 11 months later, across 24 d (herein referred to as experiment 2). This time delay allowed us to explore the stability of the representations initially observed.
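As a rough illustration of this preprocessing, the sketch below detects unsorted threshold crossings at −3.5 times the signal RMS and bins them into 50-ms firing rates. The original pipeline ran in MATLAB on Blackrock hardware, so this NumPy version is only an assumed equivalent, and using the RMS of the whole trace as the noise estimate is a simplification.

```python
import numpy as np

FS = 30_000          # samples/s, as described in the text
BIN_S = 0.050        # 50-ms bins used for firing-rate features

def threshold_crossing_times(voltage, fs=FS, k=-3.5):
    """Detect unsorted threshold crossings at k times the signal RMS (simplified noise estimate)."""
    thresh = k * np.sqrt(np.mean(voltage ** 2))
    below = voltage < thresh
    onsets = np.flatnonzero(below[1:] & ~below[:-1]) + 1   # downward crossings
    return onsets / fs                                      # crossing times in seconds

def binned_rates(spike_times, duration_s, bin_s=BIN_S):
    """Convert one channel's spike times into firing rates per 50-ms bin."""
    edges = np.arange(0.0, duration_s + bin_s, bin_s)
    counts, _ = np.histogram(spike_times, bins=edges)
    return counts / bin_s
```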
Statistics and analysis methods

Classification was performed independently for each array and each phase of the somatosensory-imagery task using linear discriminant analysis (LDA) with the fitcdiscr function in MATLAB. For analysis using spike firing rates, the average threshold-crossing rates from each channel, calculated from the entirety of each phase in 50-ms time bins, were passed as features to the classifier. For analysis of the spectral power data, power in the 4–8 (θ), 8–12 (α), 12–30 (β), 30–70, 70–150, and 150–300 Hz (γ) bands, computed for each channel, was used as the feature set. Classification was performed separately for each frequency band. We note that in the very high-frequency bands the signal is likely to reflect the spiking activity of local neurons. For both threshold crossings and spectral power, LDA was performed over 1000 repetitions. In each repetition, all 400 trials were randomly divided in a 50/50 cross-validation training and testing paradigm. Following 1000 repetitions, mean classification accuracy and 95% confidence intervals were computed. This procedure was repeated in a null condition in which class labels were randomly shuffled during each repetition to generate a chance-level distribution of classification accuracies. Significance of classification performance was determined by comparing the overlapping percentile values of the actual and null data sets. The full results are available in .

To test the ability of our datasets to generalize to one another, a decoder was trained on all of the experiment 1 data and tested on all of the experiment 2 data (another decoder was trained using the opposite train/test regime). This analysis yielded only one accuracy for each phase and electrode array, as opposed to a distribution over 1000 iterations, because testing used one specific split of the data. However, the null condition was calculated as before, by shuffling the trial labels of both train and test datasets randomly over 1000 iterations. For this reason, in the generalization analysis, a single accuracy value for each phase and electrode array was compared with the percentiles of the null distribution. For instance, we report p < 0.05 if the accuracy was greater than the value at the 97.5th percentile of the null distribution.

Initially, we performed LDA without preprocessing (i.e., without dimensionality reduction), as this allows a direct analysis of the relationship between the neural activity recorded on each channel and the imagined sensations. However, since the absence of preprocessing results in a small trade-off in classification accuracy, we separately repeated the classification using singular value decomposition (SVD) feature selection before model fitting. For threshold-crossing features, SVD was computed on mean-centered firing rates averaged within each task phase (svd function in MATLAB). Average firing rate data were projected onto the top N features that represent the dimensions of greatest variance in the data. N was determined by examining accuracy scores across phases and electrode arrays in experiment 1. N was calculated separately for spike decoding and for each frequency band in spectral power decoding. N was initially set to 5 features and then increased in increments of 5.
Each run yielded a mean accuracy across phases (cue, delay, imagery) and arrays (SMG, PMv, S1) over 1000 iterations. For each of these accuracies, the current run was compared with the previous run with N − 5 features. In all cases, accuracy as N increased followed a curve with a single peak or plateau at some N > 0 smaller than the original number of features. The run with the greater number of superior accuracies was chosen as the “better” run. In the case of a tie, the lower number of features was chosen. The number of features N is given in , dimensions. The number of features determined to be best for the experiment 1 data was also used to decode the experiment 2 data and to perform the cross-experiment decoding (i.e., training on experiment 1 and testing on experiment 2, and vice versa). The best number of features was recomputed for the combined data from experiments 1 and 2 following the same procedure. For spectral-power classification, the same approach was used to determine the optimal number of features for each frequency band individually. Where appropriate, p values were corrected for multiple comparisons using the Bonferroni–Holm method.
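To make the classification pipeline concrete, the following sketch reproduces its general logic in MATLAB: repeated 50/50 splits, an LDA classifier (fitcdiscr), a shuffled-label null distribution, and an optional SVD projection onto the top N components swept in steps of 5. It is an illustration under stated assumptions (synthetic placeholder data, a single split inside the N sweep for brevity, and selection of N at the peak of the accuracy curve), not the authors' code.

```matlab
% Illustrative sketch of the classification procedure described above
% (placeholder data; not the original analysis code).
rng(1);
X = randn(400, 96);                      % trials x features (e.g., mean rates per channel)
y = repmat((1:5)', 80, 1);               % five sensations, 80 trials each

% Repeated 50/50 split LDA with a shuffled-label null distribution
nReps = 1000;
acc = zeros(nReps,1); accNull = zeros(nReps,1);
for r = 1:nReps
    cv = cvpartition(y, 'HoldOut', 0.5);                       % stratified 50/50 split
    tr = training(cv); te = test(cv);
    mdl = fitcdiscr(X(tr,:), y(tr));                           % linear discriminant analysis
    acc(r) = mean(predict(mdl, X(te,:)) == y(te));
    yShuf = y(randperm(numel(y)));                             % chance-level (null) condition
    mdlNull = fitcdiscr(X(tr,:), yShuf(tr));
    accNull(r) = mean(predict(mdlNull, X(te,:)) == yShuf(te));
end
meanAcc = mean(acc);
ci95 = prctile(acc, [2.5 97.5]);                               % 95% confidence interval
significant = prctile(acc, 2.5) > prctile(accNull, 97.5);      % non-overlapping percentiles

% SVD feature selection: project onto top-N components, sweep N in steps of 5
Xc = X - mean(X, 1);                     % mean-center features
[~, ~, V] = svd(Xc, 'econ');             % right singular vectors (directions of variance)
candidateN = 5:5:size(Xc,2);
accByN = zeros(size(candidateN));
for k = 1:numel(candidateN)
    Z = Xc * V(:, 1:candidateN(k));
    cv = cvpartition(y, 'HoldOut', 0.5);                       % single split here for brevity
    tr = training(cv); te = test(cv);
    mdl = fitcdiscr(Z(tr,:), y(tr));
    accByN(k) = mean(predict(mdl, Z(te,:)) == y(te));
end
[~, bestIdx] = max(accByN);              % simplified: take the peak of the accuracy curve
bestN = candidateN(bestIdx);
```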
In this study, a human quadriplegic participant with intracortical microelectrode arrays in SMG, PMv, and S1 performed somatosensory imagery, the vivid recollection of sensory experiences, of five sensations. These sensations were the most common ones that the same participant experienced in a previously published sensory mapping of S1 by ICMS . We investigated the hypothesis that somatosensory imagery would generate unique representations for each sensation, which could be classified from the neural signal.

Classifying sensations

Using unsorted threshold crossings recorded during experiment 1 (see Materials and Methods), we trained an LDA classifier to identify the five sensations we tested. We trained the classifier on half of the trials (see Materials and Methods) at a single phase of the task and on data from a single array, using the average firing rate during the phase at each channel as features. We tested the classification on the other half of the trials in the same phase and array. We found significant classification accuracy for the cue, delay, and imagery phases of the task in SMG and in the imagery phase in S1 ( A ; ). To improve the classification accuracy, we applied SVD feature preprocessing before the LDA was trained (see Materials and Methods). We found significant classification for the cue, delay, and imagery phases of the task in SMG and in the imagery phase in both S1 and PMv ( B , experiment 1; ). In all cases, classification accuracy was compared with that of a null distribution ( B , null), in which the classification was performed identically but the trial labels were randomly shuffled.

LDA determines discriminability across the population activity of the whole array; however, we also observed individual channels whose firing activity significantly discriminated between two or more sensations (exemplary channels shown in A ). The percentage of channels (96 in each brain area) whose activity significantly discriminated between two or more sensations in the imagery phase only ( p < 0.05) was 49% in SMG, 22% in PMv, and 20% in S1. This metric was calculated per channel, pooling across all trials, using a Kruskal–Wallis test on the averaged firing rate in the imagery phase of the task. Data were corrected for multiple comparisons with the Bonferroni–Holm method. To compare the correspondence between results from stimulation (in previous work) and imagery for all individual channel–sensation pairs (96 channels × 5 sensations, N = 480), we identified tuning of a channel to a sensation by looking for a significant difference in firing rate across all trials of a pair between the ITI and imagery phases of the task, using a Wilcoxon signed-rank test ( p < 0.05). We identified responses to ICMS for each channel–sensation pair by looking for at least one instance of the pair during ICMS mapping in the previous study . We found 89 (18.5%) pairs (38/96 unique channels, 5/5 unique sensations) that had both neurophysiological tuning and a response to ICMS.

We also used the same method as above to perform a classification using the spectral power in various frequency bands of the raw neural signal as features (see Materials and Methods; C ; ). In SMG, we found significant classification accuracy in the cue phase across several frequency bands. We also found significant classification accuracy in the delay phase in higher frequency bands only. In the imagery phase, we saw significant classification accuracy across several frequency bands.
In PMv, we found significant classification accuracy only in the imagery phase, at high and low frequencies. Likewise, in S1, we found significant classification accuracy only in the imagery phase and only in the highest frequency band (150–300 Hz). During the ITI phase of the trial, while the participant was at rest, we never achieved classification performance different from chance level with any method or neural signal used. This confirms that the discriminable activity in the other task phases is related specifically to the somatosensory imagery task.

Longitudinal representation of sensations

We have demonstrated above that different sensations can be uniquely represented in distributed cortical areas. However, to what extent are the representations stable over time? Recordings of the human neural signal can be unstable over time , so to assess longitudinal stability, the participant performed experiment 2, repeating the imagery task ∼11 months after the initial experiment 1. We found that sensations could be classified from threshold crossings in SMG during the cue, delay, and imagery phases, as in the earlier data ( B ; ). We found significant classification in the delay and imagery phases in PMv. Using spectral power features from experiment 2 only to examine longitudinal stability, as above with threshold crossings, showed a similar trend. In SMG, significant classification was observed in all frequency bands during the cue phase but only in higher frequency bands during the delay phase ( C , middle row; ). Additional lower bands became significant in the imagery phase. In PMv, significant classification accuracy was achieved only in the cue phase at a single high-frequency band and in the imagery phase across a range of bands. No significant classification using spectral power was achieved in S1 during experiment 2.

To determine how similar activity was between experiments 1 and 2 within each task phase and each array, we performed a split training and testing using all trials of experiment 1 to train and all trials of experiment 2 to test (and vice versa). A null distribution was created using shuffled labels over N = 1000 repetitions of the classification. Using threshold crossings, significant classification accuracy was observed only in SMG during the imagery phase when testing on experiment 2. When testing on experiment 1, significant classification accuracy was observed only in SMG during the cue, delay, and imagery phases of the task . To evaluate the longitudinal stability of spectral power representations, we trained and tested on both the experiment 1 and experiment 2 datasets, as described above for threshold-crossing features. When training on experiment 1 and testing on experiment 2, SMG showed significant classification accuracy in the cue and imagery phases across a broad range of bands and in one band in the delay phase. In PMv, significant classification accuracy also occurred in the cue and imagery phases. In S1, significant classification accuracy was observed in the imagery phase at only a low-frequency band ( A ). When training on experiment 2 and testing on experiment 1, significant classification accuracy was observed in SMG during the cue, delay, and imagery phases. In PMv, significant classification accuracy occurred during the delay and imagery phases. In S1, significant classification accuracy occurred only during the imagery phase ( B ; ).
Longitudinal classification from both spike and LFP signals performs well, especially where the signal has high decoding accuracy within either experiment 1 or 2 alone. In the longitudinal analysis, taking all brain regions together, there are some additional task phases and array locations with significant classification accuracy in the spectral power data compared with the spike data. This indicates a tendency toward more general stability in the spectral power.

To ensure that classification accuracy could not be further improved with more data, we combined the threshold-crossing datasets from experiment 1 and experiment 2 to use all trials recorded ( N = 800) in the same classifier, with the same LDA and SVD method as before. Note that data across the two experiments are combined in this model. The significant classification accuracies in this result corroborate stability over time, as in the longitudinal analysis above. However, combining data does not take into consideration changes in signal or noise over time, as addressed specifically in the longitudinal analysis above. This analysis yielded significant classification accuracy for the cue, delay, and imagery phases of the task in SMG, in the imagery phase in S1, and in the imagery phase in PMv ( B , combined; ).

Finally, we combined the full spectral power data set as above ( C , bottom row) and found significant classification accuracy for SMG in the cue phase across a broad range of frequency bands, in the delay phase in only the higher frequency bands, and in the imagery phase again across a broad range of bands. In PMv and S1, significant classification accuracy was achieved in the imagery phase only. For PMv, this was achieved in a broad range of frequency bands, while for S1, it was achieved only in the highest frequency band.
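For the channel-level analysis reported above (a Kruskal–Wallis test per channel with Bonferroni–Holm correction), a minimal sketch of one possible implementation is given below. The placeholder data and variable names are assumptions, and the Holm step-down adjustment is written out explicitly rather than taken from a toolbox helper.

```matlab
% Minimal sketch (placeholder data): per-channel test for discrimination
% between sensations in the imagery phase, with Bonferroni-Holm correction.
rng(1);
rates  = randn(400, 96);                 % imagery-phase mean firing rate, trials x channels
labels = repmat((1:5)', 80, 1);          % sensation label per trial
nCh = size(rates, 2);
p = zeros(nCh, 1);
for ch = 1:nCh
    p(ch) = kruskalwallis(rates(:,ch), labels, 'off');   % 'off' suppresses table/figure output
end

% Bonferroni-Holm step-down adjustment of the per-channel p values
[pSorted, order] = sort(p);
m = numel(p);
adjSorted = min(1, cummax(pSorted .* (m - (1:m)' + 1)));
pAdj = zeros(m, 1);
pAdj(order) = adjSorted;
fractionDiscriminating = mean(pAdj < 0.05);   % fraction of channels separating >= 2 sensations
```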
As cortical stimulation methods are becoming more widely used, it is increasingly important to understand the relationship between intervention (i.e., ICMS) and evoked perception/behavior (i.e., sensations). To achieve the goal of restoring sensation in humans, we need to produce consistent effects across participants and robustly deliver specific sensations relevant to the task. We believe understanding this begins with exploring the neural representation of the sensations that we are able to elicit, for example, uncovering the neural features that represent unique sensations. In the work presented here, we demonstrate that different sensations are uniquely represented in the neural activity of human cortex. We measured spiking activity and spectral power during somatosensory imagery with intracortical recording arrays in SMG, PMv, and S1 of a single human participant with a high-level spinal cord injury (see Materials and Methods). We demonstrate that individual sensations can be accurately classified using these signals .

Here, we observe activity through somatosensory imagery, a powerful tool to elicit sensation-relevant neural activity, as physical interaction with the environment is not possible because of the nature of the injury in the quadriplegic patient population. We explore sensations that the participant had experienced both naturally before the injury and as reported during ICMS mapping. Previously, individual aspects of somatosensation have been studied in isolation, such as responses to different textures, the frequency of vibration, and individual forces. In somatosensory imagery, all these components are combined as a naturalistic sensation. With recordings across human cortical areas we can further characterize the distributed response in the brain to somatosensation .

We show that sensations can be classified in S1 during somatosensory imagery with threshold-crossing activity, when the participant vividly recalls a previously experienced sensation ( A ; ). Additionally, in S1 the sensations are classifiable only in the imagery phase in high-frequency spectral power of 150–300 Hz, again likely reflecting spiking activity ( C , top row; ). This finding suggests that S1 does not encode the planning or anticipation of sensation during imagery, as no significant classification occurred in the cue or delay phases. In PMv, we found activity in the imagery phase similar to S1, but with additional low-frequency components of 4–8 and 8–12 Hz, which may be responsible for driving coordinated networks over a larger area . In experiment 2, we were able to classify sensations from threshold-crossing activity in PMv during the delay phase. This result reinforces the trend seen in experiment 1 for PMv ( B ), suggesting that it encodes the planning or anticipation of the sensation in addition to the sensation itself . In SMG, we saw the highest classification performance of any area tested during the cue, delay, and imagery phases of the task, both in threshold-crossing activity and in the spectral power of high-frequency bands. This finding demonstrates that SMG contains somatosensory information, both during imagery and in the planning/anticipation of somatosensory imagery . Classification during the cue phase, which uniquely included θ band activity, suggests a representation of the semantic aspect of the cued sensation within SMG, which further supports the higher-order cognitive encoding of sensorimotor control in posterior parietal cortex .
We observe a large difference in decoding performance between SMG and S1/PMv. One hypothesis for this difference is that, since somatosensory imagery is a top-down cognitive process without somatosensory input, the representation is stronger in SMG, a higher-order cognitive area in somatosensory processing. Our results show that imagery produces discriminable activity in S1 and PMv; however, the reduced decoding accuracy may reflect the primary role of these neural populations in processing input from the somatosensory system.

We explicitly test somatosensory imagery to determine whether neural activity encodes the imagined sensation. This is motivated entirely by the nature of injury in our patient population. We do not assume that these areas would represent the sensations in exactly the same way if they were experienced through interaction with the environment in the absence of injury. Indeed, the representations found from somatosensory imagery have intrinsic value to efforts aimed at restoring sensation in injured people. However, it is likely that there would be a high degree of correspondence between the neural representation of sensations during somatosensory imagery and actual somatosensation . As seen in the motor system , research into motor control, motor learning, and motor BMIs has shown a high degree of similarity between the neural activity of imagined and executed behavior.

In the longitudinal comparison of the neural representation of sensation , classification accuracy decreased in most phases and locations compared with testing within the experiments , with the biggest decrease in performance observed in S1. While it is unclear what caused this change in classification accuracy, it is interesting to note that it was accompanied by the participant's comments during experiment 2 that the passage of time between the two experiments “made it much harder to imagine the sensation [evoked by ICMS] because I have not felt them in a while.” This anecdotal evidence might suggest a link between the strength of responses in S1 and the clarity with which the sensations could be recalled, as may be intuitively expected in a somatosensory imagery task. Nevertheless, threshold-crossing S1 activity was still able to yield significant longitudinal classification accuracy after 11 months, comparable to that measured initially. In SMG, the presence of significant classification across experiments may suggest a stronger representation of the task than in PMv or S1. The cross-classification performance across the two experiments suggests that while each of these areas encodes the sensations after 11 months, the representation over all brain areas differs over time. While physiological changes in the representation of the sensations or in the quality of the imagery could contribute to this, there are many additional factors unrelated to the neurophysiology of the task that likely contribute as well. For example, small movements of the array, degradation of the array over time, and changes at the electrode-tissue interface may all account for the reduced performance.

Identifying a stable relationship between aspects of the neural signal representing sensations during somatosensory imagery and features of stimulation that evoke those sensations could allow us to efficiently identify protocols for artificially eliciting sensation. This is relevant to closed-loop BMIs, where, during robotic or computer control, task-relevant sensations must be identified and delivered via ICMS.
It remains to be investigated whether correspondence between features of the neural signal during imagery and the neural signal evoked during stimulation could reduce the time needed to map the relationship between sensations and stimulation. If so, somatosensory imagery could be used to improve sensory mapping by stimulation and potentially elicit more varied responses in future work. Furthermore, S1 was originally chosen as a stimulation site because of its known neurophysiological relationship to sensation. Here, we confirm a relationship between imagined sensations and S1 neurophysiology for sensations previously elicited with S1 stimulation through the same array. Somatosensory imagery of the sensations shows an even stronger relationship between neurophysiological activity and imagined sensation in SMG. Therefore, SMG may also be a potential target for ICMS to elicit sensation. Stimulation in parietal cortex has previously been shown to engage connections with, and relate to the behavior of, the sensorimotor system.

In conclusion, we present evidence that human somatosensory imagery can be uniquely and robustly encoded in the activity of distributed cortical areas. In future work, it would be essential to identify the neurophysiology evoked by particular stimulation parameters and compare this, instead of the stimulation parameters alone, with the evoked sensations and the representation of the sensations during imagery or experience. Such information would likely further elucidate the relationship between the stimulation parameters, their ability to elicit certain sensations, and the representation of the elicited sensations in the brain.
Quality of life changes over time and predictors in a large head and neck patients’ cohort: secondary analysis from an Italian multi-center longitudinal, prospective, observational study—a study of the Italian Association of Radiotherapy and Clinical Oncology (AIRO) head and neck working group | be303eb4-30a4-43e2-9feb-34d6d50200fe | 10023607 | Internal Medicine[mh] | Head and neck carcinoma (HNC) is becoming common worldwide, and it is anticipated to rise by 30% accounting for an estimated 1.08 million new cancer cases annually by 2030 . In particular, the increasing rates of human papilloma virus (HPV)-related tumors, with better prognosis compared to the counterpart, have contributed to this high prevalence of HNC especially in the United States of America and Western Europe . Currently, regardless of HPV status, evidenced-based treatments are multimodal and may produce several physical complications and psychological distress, which may persist beyond treatment . The main treatment-related side effects are oral mucositis, taste impairment, salivary gland dysfunction, xerostomia, incapacity to chew and swallow, bacterial and fungal infections, neuropathy, trismus, and skin changes and reactions of the treated area . All these complications impair patients’ ability to perform on daily activities , resulting in social withdrawal, mental, and emotional distress and impacting patients’ health-related (HR) quality of life (QoL) domains but also more general QoL domains . HRQoL may be described as a subjective and multi-dimensional concept related to one’s perception of well-being and satisfaction with one’s own health as well daily life functioning , which encompasses physical, psychological, and social functioning and disease-treatment related symptoms and side effects . Thus, it may be considered a subset of the broader concept of QoL, defined as “an individual’s perception of their position in life in the context of the culture and value systems in which they live and in relation to their goals, expectations, standards and concerns” . Accordingly, we have decided to focus on the more comprehensive term of QoL. As it was abovementioned said, HNC patients’ face unique physical, emotional, and psychological challenges and life disruptions, in comparison to other cancer sites . Hence, understanding QoL changes and patients’ needs during and after therapy is essential to manage the disease more effectively and to set up rehabilitative strategies for the patients . Longitudinal studies reported that QoL usually decreases during radiation therapy (RT) and starts to improve 3–6 months after treatment, with a global amelioration one year after RT end, without a complete return to pre-treatment status, and with a pattern varies depending on the dimension of QoL evaluated . In addition, information about clinical and treatment-related predictors impacting on improvement and recovery on QOL is not comprehensive enough so far. A multi-center longitudinal, prospective, observational study of consecutive HNC patients, treated at seven Italian Oncology Radiotherapy Departments, was conducted on behalf of the Italian Association of Radiotherapy and Clinical Oncology (AIRO) Head and Neck Working Group. The first endpoint was the Italian language psychometric validation of the M.D. Anderson Symptom Inventory Head and Neck (MDASI-HN) questionnaire . 
Here, we present the results of the secondary endpoints: (i) to investigate QoL in patients with HNC using the MDASI-HN module to measure symptom burden during RT and in the follow-up period (namely 1, 3, 6, and 12 months after completion of RT) and (ii) to analyze whether QoL may be predicted by socio-demographic and clinical characteristics.
Procedure

This was a multi-center prospective longitudinal observational study of consecutive HNC patients treated with RT at seven Italian Oncology Radiotherapy Departments, from 2016 to 2019. Eligibility criteria were squamous cell carcinoma of the head and neck (including oral cavity, oropharynx, larynx, and hypopharynx); age ≥ 18 years; Eastern Cooperative Oncology Group (ECOG) performance status < 2; and good knowledge of the Italian language. Exclusion criteria included a history of cognitive or psychiatric disorders, synchronous tumors, or previous RT to the head and neck region. Treatment details were previously described . Briefly, all patients were treated with (chemo)radiotherapy ((C)RT) with definitive or adjuvant (postoperative) intent, based on primary site and disease stage. If needed, the type of surgical approach and the induction chemotherapy regimen were chosen by the respective professionals. The study was approved by the Ethical Committee of Fondazione IRCCS Istituto Nazionale dei Tumori in Milan (prot. INT 29/15). All patients signed study-specific informed consent and answered the questionnaire after the physician visit. The questionnaire and the socio-demographic and clinical variables were collected at different time points: pre-treatment (before RT); weekly during RT (6–7 weeks); and in the follow-up period, specifically 1, 3, 6, and 12 months after RT.

Questionnaire and data collection

The MDASI-HN is a brief and reliable patient-reported outcome measure (PROM) questionnaire developed to investigate symptom severity, specifically general cancer-related symptoms (GC-RS), head and neck cancer-related symptoms (HNC-RS), and symptom interference with daily activities (SIDA) . It contains 13 items representing the most common symptoms among all cancer types (such as fatigue, lack of appetite, and vomiting) and 9 items specific to HNC (such as problems with tasting food, choking or coughing, and difficulty swallowing or chewing). These items assess the presence and severity of symptoms during the previous 24 h, rating them on an 11-point scale from “not present” (0) to “as bad as you can imagine” (10). The last 6 items concern how these symptoms interfere with daily activities, including work, walking, and relationships with others; they assess how general and specific cancer symptoms interfered with patients' activities during the past 24 h and are rated on a scale ranging from “do not interfere” (0) to “interfered completely” (10) . Clinical and socio-demographic characteristics, including age, sex, living situation, educational level, employment status, alcohol consumption and tobacco use, ECOG performance status, human papillomavirus (HPV) status, RT setting (adjuvant vs. definitive), and concomitant systemic therapy, were also collected.

Statistical analysis

Data were analyzed using IBM SPSS Statistics version 25 (IBM, Armonk, NY, USA). Multi-level mixed-effects linear regression estimated the association of QoL with time as well as with clinical and socio-demographic variables. We opted for such a hierarchical approach as it (a) permits modeling random effects (intercepts and slopes) of time and (b) permits treating variables as nested within other variables; in particular, for the present study, the various timepoints are nested under each participant. We also computed the missing and response rates at each timepoint as percentages (e.g., response rate = number of participants who responded at week x / total number of participants × 100).
The following variables were investigated: time (in weeks), age, sex, living situation, educational level, employment status, alcohol consumption and tobacco use, ECOG performance status, HPV status, RT setting, and concomitant systemic therapy. Last, we set alpha at p < 0.05.
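Although the models were fitted in SPSS, their structure can be illustrated with any mixed-model routine. The sketch below uses MATLAB's fitlme on synthetic placeholder data to show a model with fixed linear, quadratic, and cubic effects of time, an example covariate set, and random intercepts per participant; the variable names, covariates, and data are assumptions made for illustration and do not reproduce the study's analysis.

```matlab
% Illustrative sketch only (synthetic data; the study used SPSS): a multilevel
% model with linear, quadratic, and cubic time effects and random intercepts.
rng(1);
nPatients = 166;
weeks = [1:7 12 20 33 52]';                               % assumed assessment weeks
nObs = numel(weeks);
ID   = repelem((1:nPatients)', nObs);
Time = repmat(weeks, nPatients, 1);
Age  = repelem(randi([40 80], nPatients, 1), nObs);
HPV  = repelem(randi([0 1],  nPatients, 1), nObs);
GCRS = 2 + 0.8*Time - 0.04*Time.^2 + 0.0004*Time.^3 + randn(size(Time));  % synthetic score
T = table(GCRS, Time, Time.^2, Time.^3, Age, categorical(HPV), categorical(ID), ...
          'VariableNames', {'GCRS','Time','Time2','Time3','Age','HPV','ID'});

lme = fitlme(T, 'GCRS ~ Time + Time2 + Time3 + Age + HPV + (1|ID)');   % random intercepts
disp(lme.Coefficients)                                    % fixed-effect estimates and p values

% A random slope of time would be specified as (1 + Time | ID) and retained
% only if it improves the fit, e.g., via compare(lme, lmeWithSlope).
```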
Participants
From January 2016 to December 2019, 166 HNC patients were enrolled and received (C)RT. The response rate at the beginning of the study was high in all three dimensions: at time 1 it ranged from 95.78% (GC-RS) to 93.37% (SIDA); however, it slowly decreased from the last week of treatment, and the missing rate gradually increased in the follow-up period. At week 8, the missing rate was 31.93% for all three factors of the MDASI-HN, and it rose to 60.84% at week 52. Patient socio-demographic characteristics are shown in Table , while tumor and treatment characteristics are shown in Table . Most patients (79%) had locally advanced disease according to the TNM 7th edition.
Socio-demographic and clinical variables and changes of QoL over time
Considering the whole sample, a hierarchical linear model analysis was first conducted in a stepwise fashion with the factor GC-RS as the dependent variable. It indicated that the best model included the linear, quadratic, and cubic effects of time, with both the intercepts and the slope of time (linear) as random effects. Subsequently, the other variables were entered in the analyses; after entering them, the random effect of the slope was no longer significant and was hence excluded. Table shows the results of this model. A second analysis was conducted on the factor HNC-RS as the dependent variable in the same stepwise fashion as for the first dimension. The best fitting model included the linear, quadratic, and cubic trend and the random effect of the intercepts (linear). Subsequently, the other variables were entered in the analyses; none of the variables considered reached significance except for time (Table ). A third analysis was conducted on SIDA as the dependent variable, again in a stepwise fashion. The best fitting model included the three effects of time (linear, quadratic, and cubic) and the random effects of the intercepts and the slope (linear). As for the first factor, once the other variables were entered in the analyses, the random effect of the slope was no longer significant and was excluded. HPV status and the linear, quadratic, and cubic effects of time were significant (Table ). As Fig. a shows, for all three MDASI factors the scores increased from week 1 to week 8 (with some fluctuation between week 4 and week 8), followed by a decrease from week 8 to week 52. Considering that a higher score indicates lower QoL, the results indicated a worsening in the first eight weeks, followed by a slow return to a better QoL.
Changes of QoL over time: the role of HPV
Since patients diagnosed with oropharyngeal cancer outnumbered those with other tumor locations, the same analyses as above were conducted only for cases in which the tumor was located in the oropharynx, considering HPV-positive and HPV-negative patients separately. For HPV-negative patients, as can be seen in Table , the best fitting model for the GC-RS factor included the linear, quadratic, and cubic trend of time; all the other variables; and the random effect of the intercepts (linear). This model showed that the linear, quadratic, and cubic effects of time were all significant. For the HNC-RS factor, the best model included the fixed effects of the linear, quadratic, and cubic effects of time and of all the other variables, plus the intercepts of time (linear) as random effect. Again, the linear, quadratic, and cubic effects of time were all significant. The analysis conducted on the SIDA factor showed that the best model included the three effects of time (linear, quadratic, and cubic), all the other variables, and the random effect of the intercepts (linear); the linear, quadratic, and cubic effects of time were all significant. In all three dimensions, none of the other variables considered reached significance. For HPV-positive patients (Table ), the best model for the first factor included the fixed effects of the linear, quadratic, and cubic effects of time and of all the other variables, plus the intercepts of time (linear) as random effect. The linear, quadratic, and cubic effects of time were all significant. Furthermore, the effects of gender, age at diagnosis, educational level, surgery, and alcohol use were also significant. The estimated marginal means indicated that male patients (M = 2.16, SE = 0.42), those with a higher educational level (M = 2.11, SE = 0.33), those who had surgery (M = 2.15, SE = 0.53), and those who used alcohol (M = 2.22, SE = 0.38) had lower scores than female patients (M = 3.30, SE = 0.37), those with a low educational level (M = 3.35, SE = 0.45), those who had not undergone surgery (M = 3.31, SE = 0.32), and those who never drank alcohol (M = 3.24, SE = 0.40). For the second factor, the best fitting model included the linear, quadratic, and cubic trend of time; all the other variables; and the random effect of the intercepts (linear). The linear, quadratic, and cubic effects of time were all significant, as were the effects of educational level and ECOG status. Patients with a lower educational level (M = 5.38, SE = 0.47) and those fully active (ECOG 0) (M = 4.93, SE = 0.41) showed higher scores than those with a higher educational level (M = 3.56, SE = 0.35) and those restricted in physically strenuous activity (ECOG 1) (M = 4.01, SE = 0.43). For the third factor, the best model included the fixed effects of the linear, quadratic, and cubic effects of time and of all the other variables, plus the intercepts of time (linear) as random effect. Again, the linear, quadratic, and cubic effects of time were all significant, as were the effects of gender, age at diagnosis, employment status, and alcohol use. Patients who were female (M = 3.70, SE = 0.62), employed (M = 3.76, SE = 0.68), and never used alcohol (M = 3.57, SE = 0.66) showed higher scores than those who were male (M = 2.08, SE = 0.70), unemployed (M = 2.02, SE = 0.63), and alcohol users (M = 2.21, SE = 0.63). As Fig. b–d shows, HPV-positive patients showed higher scores, and thus worse QoL, during treatment, whereas HPV-negative patients had worse QoL in the follow-up period, specifically for the HN cancer-related symptoms and symptom interference with daily activities factors.
In this prospective longitudinal study, we used the PROM MDASI-HN to detect patients' symptom burden and to implement interventions and therapy adjustments specific to each patient. A 3-factor solution, including GC-RS, HNC-RS, and SIDA, was considered, and a series of linear mixed model analyses were conducted. In both the GC-RS and HNC-RS domains, time was the only significant predictor of patients' QoL, whereas for SIDA, time and HPV status were significant, with HPV-positive patients reporting worse QoL than HPV-negative ones. It was evident that HNC patients' QoL declined during RT (Fig. a), especially for symptoms specific to HNC, such as problems with mucus and difficulty in swallowing, which turned out to be more painful; nonetheless, QoL slowly improved as soon as treatment ended, which is consistent with the pattern reported in other studies . Indeed, it is plausible that symptom severity is worse during RT because of tumor presence as well as short-term side effects of therapy, which consequently affect patients' lives, whereas after therapy completion there should be physical relief due to tumor size reduction and, thus, an improvement in patients' perception of their quality of life. However, it is also important to consider findings in which side effects and problems persisted up to the 1-year follow-up and even beyond . In these cases, the sequelae were related to specific HNC-related symptoms, such as dry mouth, sticky saliva, or sensory dysfunction, showing that although general and global QoL recovered, the same did not happen for specific HNC symptoms. For instance, Oskam and colleagues found that the QoL decrease related to HNC-specific symptoms persisted for a period of 8 to 11 years post-diagnosis. A possible explanation is that these problems are long-term side-effects of treatment, which appear only years after therapy, whereas other symptoms, such as nausea or pain, are caused by the presence of the tumor or by treatment administration . Among the studies identified, only a few employed the 28-item M.D. Anderson Symptom Inventory Head and Neck module (MDASI-HN), which was used here to assess symptom severity during RT as well as in the follow-up period. Most previous research used QoL measures that were longer than the MDASI-HN, although they measured similar dimensions; thus, future research could use this questionnaire to assess patients' QoL while avoiding extra burden. The same analyses were conducted among oropharyngeal cancer patients, distinguished by HPV-positive and HPV-negative status. For HPV-negative patients, only time predicted patients' QoL. Among HPV-positive patients, time was significant in all three factors. Regarding the GC-RS factor, female patients, those who underwent surgery, those with a low educational level, and patients who had never drunk alcohol had worse QoL; moreover, older patients were likely to have decreased QoL. It seems understandable that patients who had surgery may be debilitated and thus have low QoL; similarly, patients with a low educational level may engage in unhealthy behaviors and have fewer resources to cope with their disease. In relation to the HNC-RS factor, patients restricted in physically strenuous activity (ECOG 1) or with a high educational level had better QoL than fully active patients (ECOG 0) or those with a lower educational level. As for ECOG, our results appear contradictory at first glance.
We need to underline that a good performance status is generally classified as ECOG 0 or 1, with the two categories often used interchangeably. ECOG 0–1 is linked to better values on several QoL scales. A possible explanation of our finding is that, for patients with no functional impairment and a premorbid lifestyle corresponding to an ECOG 0 status before starting RT, any impact on QoL is perceived more strongly, since the difference from baseline conditions is greater than for patients with ECOG 1. For SIDA, it was found that older patients, female patients, those who were employed, and those who never used alcohol showed worse QoL. Unexpectedly, patients who never drank alcohol had worse QoL; this result needs to be explored further, considering that previous studies have focused on the prognostic role of alcohol use in developing HNC rather than on its specific role during cancer treatment. Comparing the QoL trends of HPV-positive and HPV-negative patients over time (Fig. b–d), it can be seen that, although HPV-positive patients had worse QoL during treatment and immediately after it, especially in relation to the GC-RS and HNC-RS factors, their QoL levels increased in the follow-up period; on the other hand, HPV-negative patients had worse QoL in the weeks after concluding treatment, that is, in the follow-up period. Our results are in agreement with the literature. Indeed, patients with HPV-related oropharyngeal cancer tend to be younger and healthier, with a very good baseline QoL, compared with individuals with HPV-unrelated HNC. However, HPV-positive cancer patients are more likely to experience a deterioration in their QoL during treatment. In a sub-study conducted within a prospective phase 3 randomized trial of concurrent standard radiation versus accelerated radiation plus cisplatin for locally advanced HN carcinoma (NRG Oncology RTOG 0129), p16-positive oropharyngeal cancer (OPC) patients had better QoL than p16-negative patients before treatment and 1 year after treatment. However, QOL/PS decreased more markedly from pretreatment to the last 2 weeks of treatment in the p16-positive group than in the p16-negative group . Similarly, in a sub-analysis of the randomized Trans-Tasman Radiation Oncology Group (TROG) 02.02 trial (HeadSTART), HPV-positive patients showed a more dramatic QoL drop with concurrent chemoradiation compared with HPV-negative patients . The current study has some limitations that should be noted and that may influence the generalization of the results. First, due to drop-out, the sample size of those who completed the questionnaire up to the last time point was smaller than that of those who answered at the beginning of the study. Second, our sample consisted mainly of male patients with a prevalence of oropharyngeal tumors. Despite these limitations, the MDASI-HN is a valid and short PROM, and having a timeline that included both the treatment and the follow-up period was fundamental to gaining a deeper understanding of patients' QoL. Future research should give further attention to treatment sequelae specific to HNC, especially in the long term; extending the follow-up period would allow a better understanding of symptom trajectories and their interference with daily life, considering that HNC-specific symptoms may persist even years after treatment ends.
Furthermore, it seems important to consider other psycho-social variables (for instance, gender and financial toxicity ), which may have an impact on treatment outcomes as well as on patients' QoL, and to analyze their trajectories over time, in order to understand how these variables interact with patients' physical and psychological well-being. This would help to develop more specific treatments and interventions that address patients' needs.
Although QoL is an important indicator of healthcare system quality and is included in the assessment of treatment benefits , some of its aspects may often be underdiagnosed and thus undertreated by physicians . Moreover, clinical as well as socio-demographic variables may have an impact on patients' QoL. Hence, PROMs should be included as a standard procedure in the assessment of patients' condition, allowing deeper insight into their disease experience and avoiding misinterpretation of responses .
Presenting decision-relevant numerical information to Dutch women aged 50–70 with varying levels of health literacy: Case example of adjuvant systemic therapy for breast cancer | 6e4c80ab-128f-44d0-a729-8f2660585633 | 11371237 | Health Literacy[mh] | Women with primary breast cancer face multiple decisions during their treatment trajectory, including concerning (neo-)adjuvant systemic therapy. Informing patients about benefits and harms of different options is one of the key principles in health communication regarding informed and shared decision making (SDM) . Benefits (e.g., prolonged survival, reduced recurrence risk) and harms (e.g., side-effects, lower quality of life) can be presented in the clinical encounter and patient decision aids (PtDAs) . Personalised survival rates, i.e., survival rates obtained from prognostic models based on individual patient characteristics, are increasingly used in this respect . Understanding information about the probability of experiencing harms and benefits of treatment options can help in the decision-making process. However, understanding medical probability information is greatly influenced by patient skills related to information processing, such as the ability to derive meaning from abstract information and factual evaluation and appraisal of information . Such information processing skills are captured in concepts such as health literacy (HL), which is mainly about accessing, understanding, and using health information ; numeracy, which is about understanding and using numbers and probabilities; and graph literacy (GL), which concerns the ability to understand graphically presented information . Concerning HL there is a variety of conceptualizations. While some conceptualizations focus on a broad and holistic view of HL that also includes, for example, searching for information or assessing the reliability of information [e.g., ], other conceptualizations focus more on specific aspects of HL, such as the literacy aspect [e.g., ]. In this study, the focus is on understanding and interpreting abstract health information. Therefore HL is conceptualized as an individual ability in which understanding and interpreting health information is central. Best practices exist for presenting decision-relevant information to enhance patient understanding, such as numerical formats instead of verbal labels only . Visual formats are recommended for certain information types, such as line graphs to show trends over time and bar graphs to compare multiple outcomes , although studies have not always compared such formats to a format without visualisation. Whether a visualisation has added value and which visualisation is best to use depends on the purpose of the communication and the target group . When using visualisations, it is important to adhere to design principles such as clearly labelled captions and axes and consistent denominators/scales . Visualising numerical information can have benefits, mainly because less cognitive effort (e.g., fewer mental calculations) is required . Furthermore, patterns are more visible, comparing multiple options is easier , and denominator neglect (i.e., tendency to pay more attention to numerators than denominators) can be reduced by conveying the part-to-whole relationship . Reducing cognitive effort may be especially beneficial for people with lower HL or numeracy , as they generally have more difficulty understanding decision-relevant information . 
Information comprehension can be divided into gist comprehension, referring to the essential aspect of the information (e.g., comparing which quantity is greater), and verbatim comprehension, which is the literal, detailed message content (e.g., how much larger a quantity is in exact numbers) . Several studies showed that visual formats improve both gist and verbatim understanding . However, other studies found that visual formats are superior mainly for communicating gist messages and numerical formats for verbatim messages . Besides improving understanding, visualisations can influence people's behavioural intentions and preferences . Therefore, when designing a presentation format, it should be aligned with the communication goal and context . Although visual formats are generally mentioned in recommendations for probability communication, e.g., the International Patient Decision Aid Standards (IPDAS) , due to variations in study design and outcome measures there is no consensus on which format should be used in which situation . For example, a study about the presentation of benefits and harms of cancer screening showed no differences in understanding between fact boxes with numbers and fact boxes with icon arrays . However, the decision to undergo cancer screening versus cancer treatment is arguably different. In the context of adjuvant systemic breast cancer treatment, previous studies of visual formats of survival rates (i.e., icon arrays and bar graphs) have been conducted . However, only one study compared icon arrays and bar graphs, showing that a 2-option icon array was best understood compared to bar graphs and 4-option visualisations . None of the studies in the context of adjuvant systemic breast cancer treatment made a direct comparison with a non-visual format. Moreover, no side-effect information was presented, while this is essential in the trade-off to be made by patients. When studies presented information about side-effects of cancer treatments or of treatments to reduce cancer risk, the probability of side-effects was typically not presented visually [e.g., – ]. One study examined two visual side-effect formats, i.e., a bar graph and an icon array, and found no differences in understanding; however, no information about the benefits was presented and no comparison with a textual condition was made . One study used a bar graph to display probabilities of side-effects in addition to text in the context of medication to reduce the risk of developing cancer; participants were found to be more accurate when viewing this bar graph compared to text alone . However, the information on side-effects was limited and consisted of one side-effect only. This does not fully correspond to the variety of information on side-effects that is usually provided for cancer treatment such as adjuvant systemic treatment for breast cancer. In addition, previous research has not investigated which format best suits people depending on their information processing skills. To gain more insight into how to present decision-relevant information in the context of systemic adjuvant treatment for breast cancer, we co-created several presentation formats regarding benefit (i.e., survival rates) and potential harm (i.e., side-effects) with patients with breast cancer in a previous study . To account for potential HL differences, this prior study also included patients with low HL.
Based on this co-creation study, bar graphs and icon arrays seemed promising for visualising survival rate information, also for those with lower HL. To compare these visualisations to non-visualised numerical information, a text block format was developed. Regarding side-effect information, patients expressed a need for including probability information and explanations of what the side-effects exactly entail , corresponding to previous research . Therefore, five presentation formats were developed that varied in the way of presenting probabilities (no probabilities or probabilities in numbers/visualisation) and in providing an additional description of side-effects (for details see section). The current study aimed to investigate: 1) the effect of several presentation formats of survival rates on comprehension; 2) the effect of the provision of side-effects probability information and accompanying description of those side-effects on comprehension and feeling informed; and (3) differential effects of the presentation formats among women with lower HL versus higher HL. Since presentation formats of decision-relevant information can also influence patients’ intentions and evaluations , we explored effects on several secondary outcomes: affect, hypothetical decision, decision confidence, and evaluation of information. In addition, we also assessed the perception of the treatment effect in the first experiment and risk perception regarding additional treatment in the second experiment. Because the information presented was intended to support decision-making, we also included decision uncertainty and the extent to which the information contributes to the perceived preparedness for decision-making in the second experiment. While some (elements) of the co-created visualisations were also tested in previous studies, our study adds the following. First, our information was similar to the complex information that can be provided in oncology practice, such as multiple side-effects with probability estimates with quite a large range. Previous studies simplified this information. Secondly, we also looked at the combination of information on survival rates and side-effects, because in SDM practice, both are needed to make a trade-off. Third, the formats were developed in co-creation with patients, so that the needs of the end-users were central during the development of the formats. Finally, we compared survival rates presented in icon arrays and bar graphs with a well-designed textual numerical format (Experiment 1), an underexposed comparison in previous studies.
Design and materials
In two online randomised experiments, women viewed presentation formats with hypothetical information embedded in an online survey. Experiment 2 was performed after the first and contained new participants; participants in Experiment 1 were excluded from participating in Experiment 2 by the panel. Before data collection, we formulated hypotheses, shown in . Ethical approval was obtained from the medical research ethics committee of Amsterdam UMC, location VUmc (FWA00017598). The Dutch Medical Research Involving Human Subjects Act (WMO) did not apply. Participants provided written informed consent (online) after reading the study aim in the online survey. This study was pre-registered before data collection through the Open Science Framework on July 8th, 2021 ( https://osf.io/sxpjf/?view_only=cb702fb0aa904758bc2ce4a19abf0b74 ). Deviations from the pre-registration are indicated. The first experiment had a 3 (survival rate format: text block–bar graph–icon array) x 2 (HL: low–high) between-subjects design. In the formats, three treatment options were presented: (1) no additional treatment; (2) hormone therapy; and (3) a combination of hormone therapy and chemotherapy. The numerical information was an example of personalised information obtained from a prognostic model . In co-creation with breast cancer patients, various survival rate visualisations were developed . A bar graph and an icon array seemed most appropriate in this context, although there were mixed results regarding the feelings evoked by icon arrays (i.e., some felt overwhelmed and others perceived them as more personal). To compare visualised data with textual numerical data, a text block format was developed that did not visualise the numerical information but did use visual elements (e.g., three text blocks to indicate the three options). displays the survival rate formats used in Experiment 1. In Experiment 2, we built on the best-understood survival rate format from Experiment 1; this survival rate format was the same for all participants in Experiment 2. This second experiment had a 5 (side-effects format: no probability information–probability information in numbers without description–visualised probability information without description–probability information in numbers with accompanying description–visualised probability information with accompanying description) x 2 (HL: low–high) between-subjects design. The five side-effects presentation formats are described in and . shows the format with the most extensive information. The first format contained no probability information, which resembles how side-effect information is often presented to patients. The other formats contained probability information in numbers or in a visualisation. Additionally, some formats contained contextual information (whether a side-effect disappears after treatment and whether something can be done about it). This need for contextual information was expressed by patients in the preceding co-creation sessions . The fact that we used likelihood estimates with a fairly wide range was driven by clinical reality: we wanted to investigate information formats that could be used in oncology practice, and in the Netherlands (nor, as far as we know, in most other countries) no exact point estimates were available. Therefore, we developed a visualisation inspired by the results of the co-creation sessions (i.e., a horizontal bar graph), but also based on the available information (i.e., whether side-effects occur in 1–10 or in more than 10 out of 100 women). The newly developed visualisation was pre-tested with eight women. An oncologist (I.R.H.M.K.) reviewed the medical content of both experiments to ensure accuracy and compliance with practice in encounters with patients.
Participants
In both experiments, we used a convenience sample of women from the general Dutch population aged 50–70 years without a history of breast cancer. In our preceding co-creation sessions, participants were (former) patients with breast cancer . In the current study, we included women without breast cancer, since the decision-relevant information is intended for newly-diagnosed patients without prior knowledge. Participants were recruited by the online panel Flycatcher (ISO-20252, ISO-27001 certified). To ensure that approximately half of the participants had low HL, women's HL was assessed before the experiments with the Set of Brief Screening Questions (SBSQ) in Dutch, a self-reported measure with three questions measured on a 5-point scale (0 = always, 4 = never) . Women who scored ≤2 on one of the questions were classified as having lower HL. The HL questions were posed together with the question of whether they had (or had had) breast cancer (exclusion criterion). Randomisation with quotas on HL was used to assign women to the first or second experiment and to assign them to a presentation format. For Experiment 1, women were recruited by the panel between August 23 and September 1, 2021, and for the second experiment between September 20 and September 30, 2021.
Procedure
Participants received a link to an online survey through Flycatcher. Participants received hypothetical but realistic information about a woman with primary hormone-sensitive/Her2Neu-negative breast cancer and were asked to imagine themselves in the hypothetical situation. To emphasize that the medical information did not relate to the participants themselves, it was stated several times in the survey: 'Please note: the information is NOT real information. The information is an example. It is not about your own medical situation.' Women then received information on survival rates (Experiment 1) or survival rates and side-effects (Experiment 2). Subsequently, women answered questions (see measures) while still being able to see the survival rate/side-effects information. Finally, women's numeracy and GL were assessed . Participants were thanked and rewarded according to the panel's agreements.
Measures
Regarding participants' demographic characteristics, age and education level were already known to the panel. Participants' HL was measured before both experiments with the SBSQ in Dutch, a self-reported measure with three questions measured on a 5-point scale (0 = always, 4 = never). The three questions were: (1) How often do you have someone help you read hospital materials?; (2) How confident are you filling out medical forms by yourself?; (3) How often do you have problems learning about your medical condition because of difficulty understanding written information? . The SBSQ was chosen as a measure of HL because it fits our view of HL in this study, namely as an individual ability in which understanding and interpreting health information is central. Moreover, in our pre-test, people responded negatively to the NVS-D and thought it resembled a math test.
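As a concrete illustration of the low-HL cut-off described above, the small helper below classifies a respondent from her three SBSQ item scores (0–4); the function and parameter names are hypothetical and are not part of the panel's software.

```python
# Hypothetical helper illustrating the SBSQ-based classification used for the HL quota:
# three items scored 0-4, and a score of 2 or lower on ANY item flags lower health literacy.
def classify_hl(help_reading: int, confidence_forms: int, problems_learning: int,
                cutoff: int = 2) -> str:
    """Return 'lower HL' if any SBSQ item score is at or below the cut-off, else 'higher HL'."""
    items = (help_reading, confidence_forms, problems_learning)
    if any(score < 0 or score > 4 for score in items):
        raise ValueError("SBSQ items are scored on a 0-4 scale")
    return "lower HL" if any(score <= cutoff for score in items) else "higher HL"

print(classify_hl(4, 2, 3))   # -> 'lower HL' (one item at the cut-off)
print(classify_hl(4, 3, 3))   # -> 'higher HL'
```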
The survey also included questions about the medical knowledge and medical education of participants, to check afterward whether there were any differences between the groups. provides the primary outcome measures of both experiments; the gist comprehension questions were asked before the verbatim comprehension questions. provides the secondary outcome measures. Questions from the first experiment were pre-tested among 67 women aged 18–74 years (M = 34.9; SD = 15.8) without breast cancer. In this pretest, HL was assessed using the SBSQ in Dutch, and women scoring ≤2 on one of the questions were indicated as having lower HL skills (n = 30). To avoid ceiling effects in the experiment and to take into account respondents' comments that the number of questions made it feel like a math exam, we selected comprehension questions answered correctly by ≤90% of the women with lower HL. In addition, the number of questions was reduced because respondents indicated that the questionnaire was too long, and minor textual adjustments were made if respondents indicated that a question or response category was not clear. Regarding the perception of treatment effect, questions were selected based on the two relevant comparisons (i.e., no treatment versus hormone therapy, and hormone therapy versus combined therapies).
Primary outcome measures–Experiment 1: Presentation of survival rates
Comprehension–gist. Three multiple-choice questions related to understanding which treatment led to more/less survival. Each answer was coded as 1 (correct) or 0 (incorrect).
Comprehension–verbatim. Four open-ended questions related to the exact amount of extra survival rate for the various treatments. Answers were coded as correct only when they were exactly correct.
Primary outcome measures–Experiment 2: Side-effects information in addition to survival rates
Gist comprehension trade-off. Three self-composed true/false questions, with an extra 'I don't know' option. The questions were meant to address the essence of weighing the harms and benefits of the options, which was defined by the researchers as knowing that additional treatment increases the chance of survival but also brings a higher risk of side-effects. Each answer was coded as 1 (correct) or 0 (incorrect).
Gist comprehension probability of side-effects. Six self-composed multiple-choice questions. Each answer was coded as 1 (correct) or 0 (incorrect). This score was not calculated for format A because this format did not contain probability information.
Feeling informed. Three items of the Informed subscale of the Decisional Conflict Scale (DCS) on a 5-point scale (1 = strongly disagree, 5 = strongly agree) . This measure was included to assess the effect of adding more detailed side-effect information on participants' feeling of being informed.
Secondary outcome measures–Experiments 1 and 2
To measure people's feelings and subjective reactions to the presentation formats, various secondary outcomes were assessed.
Affect. 10 items of the Short PANAS . Half of the items measure Positive Affect (PA) and half Negative Affect (NA) on a 5-point scale (1 = very slightly or not at all, 5 = extremely). Two items were added, i.e., worried and overwhelmed, based on our previous study .
Hypothetical decision. One question about participants' choice after seeing the information.
Decision confidence. One item about how confident one was about the decision (1 = not confident at all, 10 = very confident).
Decision uncertainty [Experiment 2 only]. Three items of the Uncertainty subscale of the DCS on a 5-point scale (1 = strongly agree, 5 = strongly disagree) .
Preparedness for decision-making [Experiment 2 only]. Six items from the Preparation for Decision-Making Scale . We included the six items about decision-making and excluded four items about preparation for decision-making with a doctor in the consultation room, to match the aim of our experiment.
Perception of treatment effect [Experiment 1 only]. Two items measuring perception of the amount of benefit of the treatments, based on Zikmund-Fisher, Angott, and Ubel , measured on a 10-point scale (1 = not reduce the chance at all, 10 = reduce the chance a great deal).
Risk perception [Experiment 2 only]. Six items about hormone therapy and six about chemotherapy, assessing perceived severity, perceived likelihood, and worry on a 10-point scale .
Evaluation of the information. Three evaluation questions with a 10-point scale (1 = totally disagree, 10 = totally agree) ; a higher score indicates a more positive evaluation.
Besides these secondary outcome measures, we used a realism check to verify whether participants were able to imagine themselves in the hypothetical situation, using two questions, i.e., 'The situation described was realistic' and 'I had no trouble imagining myself in this situation', with a 10-point scale (1 = totally disagree, 10 = totally agree) .
Data analysis
An a priori power analysis was performed based on a 3x2 (Experiment 1) and a 5x2 (Experiment 2) factorial ANOVA design with interaction effect, using the software programs G*Power and PASS. With a medium effect size (ES) of .25 (Cohen's f) , an alpha of .05, and power of .90, the required sample sizes were 210 and 260. A medium effect size was chosen from a pragmatic point of view, as we wanted to formulate recommendations for practical implementations (e.g., PtDAs). However, contrary to what was described in our pre-registration, in both experiments the analyses with comprehension as outcome were performed with cumulative odds ordinal logistic regression with proportional odds (instead of ANOVAs). This was due to the ordinal nature of the comprehension variables with limited possible outcomes (i.e., ranging from 0–3 and 0–6). Likewise, due to the categorical nature of the outcome 'hypothetical decision', the effects of format and HL on this outcome were analysed using chi-square tests of association. The other outcomes were analysed using two-way ANOVAs, with format and HL as independent variables. To account for potential effects of multiple hypothesis testing, we applied a Bonferroni correction in post hoc analyses. Results related to numeracy and GL are described in . Analyses were performed using SPSS version 26. Significance levels were set at p < .05.
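The a priori sample-size calculation can be reproduced approximately with a short noncentral-F computation. The sketch below mirrors the kind of routine used for a factorial ANOVA interaction with Cohen's f = .25, alpha = .05, and power = .90; it is our own illustrative code, not the G*Power or PASS calculation performed by the authors, and the totals it returns land near the reported 210 and 260, with small differences due to rounding and program-specific conventions.

```python
# Approximate reproduction of the a priori power analysis for the format x HL
# interaction (Cohen's f = .25, alpha = .05, power = .90); illustrative only.
from scipy.stats import f as f_dist, ncf

def required_total_n(f_effect, alpha, target_power, df_effect, n_cells):
    """Smallest total N for which the factorial-ANOVA effect reaches the target power."""
    n = n_cells + df_effect + 2                     # start from a minimally identified design
    while True:
        df_error = n - n_cells                      # error df for a full factorial model
        noncentrality = f_effect ** 2 * n           # lambda = f^2 * N
        f_crit = f_dist.ppf(1 - alpha, df_effect, df_error)
        power = ncf.sf(f_crit, df_effect, df_error, noncentrality)
        if power >= target_power:
            return n, round(power, 3)
        n += 1

# Experiment 1: 3 (format) x 2 (HL) design -> interaction df = 2, 6 cells
print(required_total_n(0.25, 0.05, 0.90, df_effect=2, n_cells=6))    # near the reported 210
# Experiment 2: 5 (format) x 2 (HL) design -> interaction df = 4, 10 cells
print(required_total_n(0.25, 0.05, 0.90, df_effect=4, n_cells=10))   # near the reported 260
```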
In two online randomised experiments, women viewed presentation formats with hypothetical information embedded in an online survey. Experiment 2 was performed after the first and contained new participants. Participants in Experiment 1 were excluded from participating in Experiment 2 by the panel. Before data collection, we formulated hypotheses, shown in . Ethical approval was obtained from the medical research ethics committee of Amsterdam UMC, location VUmc (FWA00017598). The Dutch Medical Research Involving Human Subjects Act (WMO) did not apply. Participants provided written informed consent (online) after reading the study aim in the online survey. This study was pre-registered before data collection through the Open Science Framework on July 8 th , 2021 https://osf.io/sxpjf/?view_only=cb702fb0aa904758bc2ce4a19abf0b74 . Deviations from the pre-registration are indicated. The first experiment contained a 3 (survival rate format: text block–bar graph–icon array) x 2 (HL: low–high) between-subjects design. In the formats, three treatment options were presented: (1) no additional treatment; (2) hormone therapy; and (3) combination of hormone therapy/chemotherapy. The numerical information was an example of personalised information obtained from a prognostic model . In co-creation with breast cancer patients, various survival rate visualisations were developed . A bar graph and icon array seemed most appropriate in this context, although there were mixed results regarding the feelings evoked by icon arrays (i.e., some felt overwhelmed and others perceived them as more personal). To compare visualised data with textual numerical data, a text block format was developed that did not visualise the numerical information but did use visual elements (e.g., three text blocks to indicate three options). displays the survival rate formats used in Experiment 1. We built on the best-understood survival rate format from Experiment 1 in Experiment 2. This survival rate format would be the same for all participants in Experiment 2. This second experiment contained a 5 (side-effects format: no probability information–probability information in numbers without description–visualised probability information without description–probability information in numbers with accompanying description–visualised probability information with accompanying description) x 2 (HL: low–high) between-subjects design. The five side-effects presentation formats are described in and . shows the format with the most extensive information. The first format contained no probability information, which resembles how side-effect information is often presented to patients. The other formats contained probability information in numbers or a visualisation. Additionally, some formats contained contextual information (whether a side-effect disappears after treatment and whether something can be done about this side-effect). This need for contextual information was expressed by patients in the preceding co-creation sessions . The fact that we used likelihood estimates in a fairly wide range was driven by clinical reality. We wanted to investigate information formats that could be used in oncology practice and in the Netherlands (nor in most other countries, as far as we know) no exact point estimates were available. 
So, we developed a visualisation inspired by the results of the co-creation sessions (i.e., a horizontal bar graph), but also based on the available information (i.e., whether side-effects occur in 1–10 or more than 10 out of 100 women). The newly developed visualisation was pre-tested with eight women. An oncologist (I.R.H.M.K) reviewed the medical content of both experiments to ensure accuracy and compliance with practice in encounters with patients.
In both experiments, we used a convenience sample of women from the general Dutch population aged 50–70 years without a history of breast cancer. In our preceding co-creation sessions, participants were (former) patients with breast cancer . In the current study, we included women without breast cancer, since the decision-relevant information is intended for newly-diagnosed patients without prior knowledge. Participants were recruited by the online panel Flycatcher (ISO-20252, ISO-27001 certified). To ensure that approximately half of the participants had low HL, women’s HL was assessed before the experiments with the Set of Brief Screening Questions (SBSQ) in Dutch, a self-reported measure with three questions measured on a 5-point scale (0 = always, 4 = never) . Women who scored ≤2 on one of the questions were indicated as having lower HL. HL questions were posed together with the question if they had (or had had) breast cancer (exclusion criteria). Randomisation with quotas on HL was used to assign women to the first or second experiment and to assign them to a presentation format. For Experiment 1, women were recruited by the Panel between August 23 and September 1, 2021, and for the second experiment between September 20 and September 30, 2021.
Participants received a link to an online survey through Flycatcher. Participants received hypothetical but realistic information about a woman with primary hormone-sensitive/Her2Neu-negative breast cancer and were asked to imagine themselves in the hypothetical situation. To emphasize that the medical information did not relate to the participants themselves, it was stated several times in the survey: ‘ Please note : the information is NOT real information . The information is an example . It is not about your own medical situation . ’ Women then received information on survival rates (experiment 1) or survival rates and side-effects (experiment 2). Subsequently, women answered questions (see measures) while still being able to see the survival rates/side-effects information. Finally, women’s numeracy and GL were assessed . Participants were thanked and rewarded according to the panel’s agreements.
Regarding participants’ demographic characteristics, age and education level were already known to the panel. Participants’ HL was measured before both experiments with the SBSQ in Dutch, a self-reported measure with three questions measured on a 5-point scale (0 = always, 4 = never). The three questions were: (1) How often do you have someone help you read hospital materials?; (2) How confident are you filling out medical forms by yourself?; (3) How often do you have problems learning about your medical condition because of difficulty understanding written information? . The SBSQ was chosen as a measure for HL because it fits our view of HL in this study, namely as an individual ability in which understanding and interpreting health information is central. Moreover, in our pre-test, people responded negatively to the NVS-D and thought it resembled a math test. The survey also included questions about the medical knowledge and medical education of participants to check afterward whether there were any differences between the groups. provides the primary outcome measures of both experiments. The gist comprehension questions were asked before the verbatim comprehension questions. provides the secondary outcome measures. Questions from the first experiment were pre-tested among 67 women between 18–74 years (M = 34.9; SD = 15.8) without breast cancer. In this pretest, HL was assessed using the SBSQ in Dutch and women scoring ≤2 on one of the questions were indicated as having lower HL skills (n = 30). To avoid ceiling effects in the experiment and to take into account respondents’ comments that the number of questions made it feel like a math exam, we selected comprehension questions answered correctly by ≤90% of the women with lower HL. In addition, the number of questions was reduced because respondents indicated that the questionnaire was too long and minor textual adjustments were made if respondents indicated that the question or response category was not clear. Regarding the perception of treatment effect, questions were selected based on the two relevant comparisons (i.e., no treatment versus hormone therapy, and hormone therapy versus combined therapies). Primary outcome measures–Experiment 1: Presentation of survival rates. Comprehension–gist . Three multiple-choice questions, related to understanding which treatment led to more/less survival. Each answer was coded as 1 (correct) or 0 (incorrect). Comprehension–verbatim . Four open-ended questions, related to the exact amount of extra survival rate for the various treatments. Answers were coded as correct when they were exactly correct. Primary outcome measures–Experiment 2: Side-effects information in addition to survival rates. Gist comprehension trade-off . Three self-composed true/false questions, with an extra ‘I don’t know’ option. The questions were meant to address the essence of weighing the harms and benefits of the options, which was defined by the researchers as knowing that additional treatment increases chances of survival but also brings more risks of side-effects. Each answer was coded as 1 (correct) or 0 (incorrect). Gist comprehension probability of side-effects . Six self-composed multiple-choice questions. Each answer was coded as 1 (correct) or 0 (incorrect). This was not calculated for format A because this format did not contain probability information. Feeling informed . Three items of the Informed subscale of the Decisional Conflict Scale (DCS) on a 5-point scale (1 = strongly disagree, 5 = strongly agree) . 
This measure was included to assess the effect of adding more detailed side-effect information on participants’ feeling of being informed.
To measure people’s feelings and subjective reactions to the presentation formats, various secondary outcomes were assessed. Affect 10 items of the Short PANAS . Half of the items measure Positive Affect (PA) and half Negative Affect (NA) on a 5-point scale (1 = very slightly or not at all, 5 = extremely). Two items were added, i.e., worried and overwhelmed, based on our previous study . Hypothetical decision One question about participants’ choice after seeing the information. Decision confidence One item about how confident one was about the decision (1 = not confident at all, 10 = very confident). Decision uncertainty [Experiment 2 only] Three items of the Uncertainty subscale of the DCS on a 5-point scale (1 = strongly agree, 5 = strongly disagree) . Preparedness for decision-making [Experiment 2 only] Six items from the Preparation for Decision-Making Scale . We included the six items about decision-making and excluded four items about preparation for decision-making with a doctor in the consultation room, to match the aim of our experiment. Perception of treatment effect [Experiment 1 only] Two items measuring perception of the amount of benefit of the treatments based on Zikmund-Fisher, Angott, and Ubel , measured on a 10-point scale (1 = not reduce the chance at all, 10 = reduce the chance a great deal). Risk perception [Experiment 2 only] Six items about hormone therapy and six about chemotherapy, assessing perceived severity, perceived likelihood, and worry on a 10-point scale . Evaluation of the information Three evaluation questions with a 10-point scale (1 = totally disagree, 10 = totally agree) , a higher score indicates a higher evaluation. Besides these secondary outcome measures, we used a realism check to verify whether participants were able to imagine themselves in the hypothetical situation, using two questions, i.e., ‘The situation described was realistic’ and ‘I had no trouble imagining myself in this situation’, with a 10-point scale (1 = totally disagree, 10 = totally agree) .
An a priori power analysis was performed based on a 3 × 2 (Experiment 1) and a 5 × 2 factorial ANOVA design (Experiment 2) with an interaction effect, using the software programs G-power and PASS. With a medium effect size (ES) of .25 (Cohen's f), an alpha of .05, and a power of .90, the required sample sizes were 210 and 260. A medium effect size was chosen from a pragmatic point of view, as we wanted to formulate recommendations for practical implementations (e.g., PtDAs). However, contrary to what was described in our pre-registration, in both experiments the analyses with comprehension as outcome were performed with cumulative odds ordinal logistic regression with proportional odds (instead of ANOVAs). This was due to the ordinal nature of the comprehension variables with limited possible outcomes (i.e., ranging from 0–3 and 0–6). Likewise, due to the categorical nature of the outcome 'hypothetical decision', the effects of format and HL on this outcome were analysed using chi-square tests of association. The other outcomes were analysed using two-way ANOVAs, with format and HL as independent variables. To account for potential effects of multiple hypothesis testing, we applied a Bonferroni correction in post hoc analyses. Results related to numeracy and GL are described in . Analyses were performed using SPSS version 26. Significance levels were set at p < .05.
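As an illustration of the comprehension analysis described above, the minimal sketch below fits a cumulative-odds (proportional-odds) model for the 0–3 gist score with format, HL, and their interaction as predictors. It is not the study's SPSS syntax; the data file and column names are assumptions made for the example.

```python
# Minimal sketch (assumed variable names, not the study's code) of the cumulative-odds
# ordinal logistic regression: gist comprehension (0-3) regressed on presentation
# format, health literacy, and their interaction.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("experiment1.csv")  # hypothetical file with columns: fmt, hl, gist

X = pd.DataFrame({
    "fmt_bar":  (df["fmt"] == "bar").astype(float),   # text block = reference format
    "fmt_icon": (df["fmt"] == "icon").astype(float),
    "hl_high":  (df["hl"] == "high").astype(float),   # low HL = reference group
})
X["bar_x_hl"] = X["fmt_bar"] * X["hl_high"]           # format x HL interaction terms
X["icon_x_hl"] = X["fmt_icon"] * X["hl_high"]

y = pd.Series(pd.Categorical(df["gist"], categories=[0, 1, 2, 3], ordered=True))

res = OrderedModel(y, X, distr="logit").fit(method="bfgs", disp=False)
print(res.summary())
print(np.exp(res.params[: X.shape[1]]))  # cumulative odds ratios, e.g. for hl_high
```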
Sample characteristics
 shows an overview of participant inclusion, non-response reasons, and randomisation. The panel examined the data for potential inattentive responders and poor data quality. Quality checks were performed on open answers, consistency of answers, straight-lining (i.e., the same answer option is chosen throughout a series of statements), and completion time, resulting in the removal of one respondent in Experiment 1 and two in Experiment 2. Randomisation with quotas on HL was used to assign women to the first or second experiment. This ensured that approximately the same number of participants with low and high HL would participate in the first and second experiments and be assigned to a presentation format. In Experiment 1, women were on average 59.4 ± 6.1 years (N = 219), 63 (28.8%) women had a low educational level, and 98 (44.7%) had low HL. In Experiment 2, women were on average 59.6 ± 6.0 years (N = 282), 80 (28.4%) women had a low educational level, and 116 (41.1%) had low HL. describes background characteristics of both cohorts.
Experiment 1 – presentation of survival rates
 presents descriptive findings for the primary and secondary outcomes of Experiment 1. The ordinal logistic regression results for the interaction and main effects of format and health literacy on comprehension from both Experiments 1 and 2 described below are tabulated in .
Primary outcomes – comprehension (H1)
Contrary to our hypothesis, there was no significant effect of presentation format (H1a), Wald χ²(2) = .83, p = .660, nor an interaction effect between format and HL on gist comprehension (H1c), Wald χ²(2) = 2.74, p = .254. HL had a significant effect on gist comprehension, Wald χ²(1) = 6.84, p = .009. Those with high HL exhibited higher gist comprehension compared to those with low HL, with the odds of women with high HL having higher gist comprehension being 2.66 (95% CI, 1.28 to 5.55) times that of women with low HL. Contrary to our hypothesis, there was no significant main effect of format on verbatim comprehension (H1b), Wald χ²(2) = 2.15, p = .342. Nor was HL associated with verbatim comprehension, Wald χ²(1) = 1.32, p = .251. The model with interaction violated the assumption of proportional odds; therefore, a multinomial logistic regression was conducted to test the interaction between format and HL on verbatim comprehension. Contrary to our hypothesis (H1c), this showed no significant interaction, χ²(8) = 12.35, p = .136.
Secondary outcomes
None of the interactions or main effects tested in exploratory analyses were significant. Also, neither format, χ²(4) = 6.15, p = .188, nor HL, χ²(2) = 3.80, p = .150, were associated with the hypothetical decision. Concerning the realism check (how well women empathised with the scenario), there were no effects related to format and/or HL.
Experiment 2 – side-effects information in addition to survival rates
Building on Experiment 1, we intended to use the best-understood survival rate format from this first experiment in Experiment 2. However, as there were no significant between-format differences, we made the pragmatic decision to continue with the text block format.
Primary outcomes – comprehension (H2)
 presents descriptive findings for the primary and secondary outcomes. Contrary to the hypothesis, there was no main effect of format (H2a), Wald χ²(4) = 4.68, p = .322, nor a significant interaction between format and HL on gist comprehension of the trade-off (H2c), Wald χ²(4) = 5.92, p = .206.
Nor did HL influence gist comprehension of the trade-off, Wald χ²(1) = .61, p = .436. Also for gist comprehension of the probability of side-effects, there were no significant effects, neither for the interaction effects of format and HL (H2c), Wald χ²(3) = 1.41, p = .703, nor for the main effects of format (H2b), Wald χ²(3) = 1.17, p = .760, or HL, Wald χ²(1) = 1.56, p = .211. This lack of effects was also not in line with our hypotheses.
Primary outcomes – feeling informed (H3)
Regarding 'feeling informed' (H3), we found an interaction between HL and format, F(4, 274) = 2.67, p = .032, partial η² = .04. Therefore, an analysis of simple main effects was performed. For format C (visualised probability information without description), there was a difference in the average score on feeling informed between women with low and high HL, F(1, 272) = 7.75, p = .006 after Bonferroni correction, partial η² = .03. Women with low HL presented with this format felt more informed (4.40 ± .64) than women with high HL presented with this format (3.89 ± .73), a mean difference of .51 (95% CI, .15 to .86). The interaction is displayed in . Other simple main effects were not significant.
Secondary outcomes
There was a main effect of HL on Negative Affect (PANAS NA), F(1, 272) = 5.80, p = .017, partial η² = .02. Women with low HL experienced more Negative Affect (marginal means 2.60 ± .11) than women with high HL (marginal means 2.27 ± .09), a mean difference of .34 (95% CI, .06 to .61). Interactions or main effects for the other affect outcomes were not significant. For risk perception regarding hormone treatment, there was a main effect for format, F(4, 272) = 4.40, p = .002, partial η² = .06. The pairwise comparisons showed a significant difference between format A (no probability information) and format C (visualised probability information without description) of .91 (95% CI, .24 to 1.58), p = .002. Risk perception for women presented with format A (no probability information) was higher (marginal means 7.34 ± .16) compared to risk perception of women presented with format C (visualised probability information without description; marginal means 6.44 ± .17). For risk perception regarding chemotherapy, the ANOVA showed a main effect for format, F(4, 272) = 2.41, p = .049, partial η² = .03, but none of the post hoc pairwise comparisons were statistically significant. For the secondary outcomes decision uncertainty, preparedness for decision-making, evaluation of information, and decision confidence, there were no interactions or main effects. The effects of format and HL on the hypothetical decision were also not significant, nor were interactions or main effects regarding the realism check.
In this study, we investigated the effects of several (visual) presentation formats to present decision-relevant numerical information (i.e., survival rates and side-effects) to patients in support of informed and shared decision making (SDM). Results showed that, based on medium effect sizes, different well-designed presentation formats that adhere to best practices in probability communication did not differ in terms of comprehension of the end-users. However, regardless of presentation format, women with low health literacy (HL) exhibited worse gist understanding of the survival rates than women with high HL. Regarding the side-effects formats, when the format with visualised probability information without a description of the specific side-effects was shown, women with low HL felt better informed than women with high HL. There may be several explanations for the lack of beneficial effects of the visual formats compared to a text format on respondents’ understanding. First, it might be that the text blocks showing the numerical information were quite optimal because they followed general best practices in probability communication as well as graphical design principles. Previous studies found that structuring textual information in, for example, fact boxes or tables can lead to the same or better level of comprehension compared to non-structured textual formats or other graphical displays . The advantage of structured textual formats can be that, unlike bar graphs and icon arrays, no legend needs to be interpreted. It is also possible that despite developing the visualisations in co-creation with the target group, the visualisations were still not optimal. For example, women who were particularly focused on the visualisation and the legend may not have noticed that all the options were about someone who had already undergone surgery. It can also be argued that the relatively limited additional benefit of hormone therapy and chemotherapy (4% and 5%, respectively) versus a relatively large percentage of people surviving without treatment (72%) may have played a role in the lack of effects of the visual formats. It might be that other survival-to-benefit ratios may be more noticeable when displayed visually. Another potential explanation may be related to the difficulty of the information. For example, when developing the survival rate formats with patients, women expressed a need for an overview of options, including ‘no additional treatment’. This resulted in formats with three options, which may have been overwhelming, especially for those with lower information processing skills. Indeed, our study showed that women with low HL had worse understanding of the gist of survival rates than women with high HL. A study on the same three treatment options showed that comprehension increased when presenting options as two decisions instead of one (no additional treatment versus receiving hormone therapy and then hormone therapy versus hormone therapy with chemotherapy) . It may be worthwhile to further explore how women’s expressed need for an overview of treatment options can be combined with sequential presentation of information. Also regarding the side-effects, the difficulty of the information may have played a role. The information about side-effects was generally not well understood, both by people with higher and lower HL. 
Dividing the probabilities into two categories (i.e., occurrence in more than 10 out of 100 women and occurrence in 1 to 10 out of 100 women) might be more difficult to interpret and compare than exact probability information (e.g., 8 out of 100 women will experience this side-effect). Additionally, more than 10 out of 100 women represents a wide range of probabilities. This probably makes the information, even when visualised, more difficult. Exact point estimates were not available, which raises the question of how to deal with this in practice. A limited amount of research has investigated the presentation of uncertainty in icon arrays with colour gradient, shading, or arrow, but either the effect on comprehension was not (yet) examined, or no differences were found compared to no visualisation . Further research into how to communicate a range is therefore needed. Regardless of format, women with lower HL experienced more negative affect than women with high HL. However, when provided with visualised probability information without a description of the side-effects, women with low HL felt better informed than women with high HL. An explanation might be that those with lower HL might not accurately estimate how informed they are, as found in previous research . However, we found significant correlations between the scores for knowledge and feeling informed for both women with lower and higher HL. This might be because we, unlike the previous study, measured these outcomes immediately after information provision. Also concerning risk perception related to hormone treatment, an effect for visualising probability information was found. Women provided with visualised probability information without a description of side-effects exhibited a lower risk perception than those provided with no probability information. However, this effect was not present in the format containing a description of the side-effects. An explanation could be that the amount of information in this description reduced the positive effect of the visualisation. In these cases, it may be that less information (e.g., not a description for all the side-effects) may be preferred by lower HL people to gain a sense of ‘mastery’ of this complex information. When examining comprehension, gist and verbatim comprehension were assessed using self-composed questions. This was due to an absence of validated questionnaires, as these concepts depend on the specific information presented. Especially for gist comprehension, it remains difficult to assess which gist representations count as ‘accurate’ and how surveys should capture this . We reasoned that the essential information was which additional treatment gives the most benefit (extra survival, Experiment 1) and that additional treatment increases survival but also brings more risks of side-effects (Experiment 2). One may argue that other, unassessed, gist representations can also be distilled from the information, such as that additional benefit of adjuvant systemic therapy is ‘relatively small’ or that even without additional treatment most women will still be alive in 10 years. However, women with low HL exhibited worse gist understanding overall, suggesting that we captured at least some important aspects of gist representations. It should be noted that the comprehension questions were mainly aimed at understanding the core message of the information. 
This may have resulted in the comprehension questions in Experiment 1 in particular being too easy and not necessarily the most focused on discovering differences between the formats. Therefore, the comprehension questions themselves may also have contributed to the lack of effects. The survey as a whole may also have played a role, as women had to understand not only the information but also the questions. The questionnaire was translated to a reading level of up to sixth grade by a plain language expert and participants saw the information on screen while answering questions. Nevertheless, the questions may have been difficult, especially for those with lower HL. This study used adjuvant systemic therapy for breast cancer as a case example. However, more and more decision-relevant data are becoming available for other forms of cancer as well, such as lung cancer and stomach and oesophageal cancer. The results of the current study can be important in presenting decision-relevant numerical information more broadly in oncology. However, the context and specific decision to be made should be taken into account. For example, the survival rates, available treatment options, prognosis, and the average age of the patient population can influence comprehension. User-testing within the specific context remains necessary. Strengths and limitations A strength of this study is the inclusion of information about both survival rates and side-effects of treatment options. Previous studies examined visual formats of survival rates for adjuvant systemic breast cancer treatment , but without the comparison with a textual format. Other studies investigated side-effects message/presentation format [e.g., , , ], but not in combination with survival rate information. For future research, it may be interesting to investigate whether the different combinations of survival rates and side-effect formats have different effects on the trade-off to be made, as the different combinations of formats were not investigated in the current study. A potential limitation regarding generalizability in practice is the use of hypothetical scenarios. It may be that respondents paid less attention to the information due to the hypothetical scenario. Besides, the information is meant for women diagnosed with breast cancer, a stressful and life-threatening situation involving emotions, which may influence information processing. Another potential limitation is that we do not know what previous experiences our respondents had with cancer. Moreover, multiple (secondary) outcome measures were examined, resulting in multiple testing. However, since no major effects were found, this limitation ultimately did not influence our findings. The effect sizes of the studies on which the International Patient Decision Aid Standards (IPDAS) collaboration’s recommendation to use visualisations is based are generally small to moderate . Sample sizes in our experiments were based on medium effect sizes of Cohen’s f .25, a pragmatic choice as our starting point was to make recommendations for implementation in practice (e.g., PtDAs). However, these medium effect sizes may be a reason for the lack of differences between formats and it should be noted that non-significant findings do not necessarily indicate the true absence of an effect. Moreover, initial power calculations were based on factorial ANOVA designs, whereas ultimately ordinal logistic regression analyses were performed. This may have affected the power. 
In addition, based on the initial power calculations, group sizes for the bar graph format in Experiment 1 were smaller for the low HL (n = 24) and high HL (n = 31) than the required n = 35 for a 90% power to detect an interaction-effect based on a medium effect size. For Experiment 2, group sizes were smaller for the low HL presented with format B (n = 24), format D (n = 17), and format E (n = 23) than the required n = 26 for a 91% power to detect an interaction-effect based on a medium effect size. However, it should be noted that the expected interaction-effects were ordinal-interactions rather than full crossover interactions, therefore the statistical power to detect the expected interactions is lower than the a priori calculated 90% and 91%. This implies that the question of whether people with lower health literacy levels would benefit more from the formats with the visualizations than people with higher health literacy could not be answered reliably.
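To make the point about reduced power for ordinal interactions concrete, a small Monte Carlo check of a 3 × 2 between-subjects design can be run, as sketched below. The cell means, standard deviation, and cell sizes are illustrative assumptions only, not values estimated from our data.

```python
# Hedged sketch: simulated power to detect an ordinal (non-crossover) format x HL
# interaction in a 3 x 2 between-subjects ANOVA. All numbers are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
cell_means = {  # visual formats assumed to help the low-HL group slightly more
    ("text", "low"): 0.00, ("bar", "low"): 0.30, ("icon", "low"): 0.30,
    ("text", "high"): 0.20, ("bar", "high"): 0.35, ("icon", "high"): 0.35,
}
n_per_cell, sd, n_sims, hits = 35, 1.0, 500, 0

for _ in range(n_sims):
    rows = [{"fmt": f, "hl": h, "y": rng.normal(m, sd)}
            for (f, h), m in cell_means.items() for _ in range(n_per_cell)]
    fit = smf.ols("y ~ C(fmt) * C(hl)", data=pd.DataFrame(rows)).fit()
    if anova_lm(fit, typ=2).loc["C(fmt):C(hl)", "PR(>F)"] < 0.05:
        hits += 1

print(f"Simulated power for this ordinal interaction: {hits / n_sims:.2f}")
```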
No evidence was found for a medium-sized difference in comprehension when decision-relevant numerical information was presented to patients in a well-designed text block, bar graph, or icon array, all of which adhered to risk communication best practices. Providing patients with visualisations might therefore not necessarily yield an advantage over providing well-structured numerical information. These results have practical implications, for example, for patient decision aid developers: visualising numerical information is not a magic bullet. Furthermore, the results of this study show that a deeper understanding is needed of how to present numerical and context-specific information about side-effects. This is especially important for patients with lower information processing skills, who understood the information less well and experienced more negative affect when receiving side-effect information.
S1 File. Side-effects formats. (PDF)
S2 File. Table secondary outcomes. (PDF)
S3 File. Numeracy and graph literacy. (PDF)
S4 File. Ordinal logistic regression for the interaction and main effects of format and health literacy on comprehension. (PDF)
Effects of Exercises of Different Intensities on Bone Microstructure and Cardiovascular Risk Factors in Ovariectomized Mice | 2ffdabd6-7337-44d0-b04b-16e18dca7c8c | 11817207 | Surgical Procedures, Operative[mh] | Menopause, defined as the cessation of menstruation due to the permanent loss of ovarian follicular function, can see women spending up to 40% of their lives in a postmenopausal state . The incidence of osteoporosis and cardiovascular disease (CVD) significantly increases in postmenopausal women . Studies have shown a close link between bone health and CVD . Low bone mineral density is associated with endothelial dysfunction, coronary artery disease, peripheral vascular disease, and cardiovascular mortality . Complications such as fractures caused by osteoporosis and CVD severely diminish the quality of life of patients and are major causes of death among elderly women . Appropriate exercise can increase bone density and prevent cardiovascular disease . Due to the significant role of exercise in disease prevention and treatment, a review published in 2016 proposed that “exercise is a kind of medicine” . However, it remains unclear whether there are differences in the preventive and therapeutic effects of different exercise intensities on cardiovascular disease and osteoporosis in menopausal women, as well as the underlying mechanisms of these effects. Osteocalcin (OCN) is a small protein secreted by osteoblasts that not only affects bone formation but also enters the bloodstream to influence glucose metabolism and the cardiovascular system, serving as an important endocrine factor . Moreover, serum OCN is one of the few factors with functions that cover major menopause-related diseases in women, including osteoporosis, cardiovascular disease, and anxiety . Studies on menopausal women have shown that coronary artery calcium score and atherosclerosis are positively correlated with OCN concentrations , indicating that serum OCN may play a role in cardiovascular diseases among menopausal women. It is necessary to investigate the exercise-induced changes of OCN in order to clarify the roles of exercise on the cardiovascular system. The OVX mouse model can effectively mimic the changes in estrogen levels in postmenopausal women and has therefore been widely accepted as a model for menopause to study postmenopausal symptoms . Therefore, in this study, we used ovariectomized (OVX) mice as a model and subjected them to moderate-intensity continuous exercise and high-intensity interval exercise to examine their effects on serum OCN levels, as well as cardiovascular risk factors and osteoporosis in OVX mice. Our study indicates that there is no difference between the two exercise modalities in improving cardiovascular disease risk factors in OVX mice, with MICT showing superior effects on bone microstructure compared to HIIT. Meanwhile, ucOCN does not appear to be the direct cause of the improvement in cardiovascular risk factors due to exercise; rather, it may be related to changes in estrogen levels. Conversely, ucOCN could serve as a potential biomarker for assessing the effectiveness of exercise in the prevention and treatment of osteoporosis. 2.1. Body Weight and Uterus Weight The wet uterine weights and body weights of each group are shown in B,C. Compared with the Sham group, the uterine weights of the OVX group, as well as the OVX + MICT and OVX + HIIT groups, were significantly reduced. 
There were no significant differences between the OVX group and the OVX + HIIT or OVX + MICT groups. Additionally, the body weights of the mice in the OVX group were significantly higher than those in the Sham group at the third week post-OVX, and both intensities of exercise significantly inhibited the weight gain in the OVX mice. 2.2. Serum E 2 , cOCN, and ucOcn Level Serum E 2 , cOCN, and ucOCN levels were measured to assess the effects of varied exercise intensities on mice ( D–F). Compared with the Sham group, serum E 2 levels significantly decreased in the OVX, OVX + MICT, and OVX + HIIT groups, with no significant differences among them ( D). As shown in E,F, compared with the Sham group, the serum levels of cOCN in OVX mice were significantly reduced, while the concentrations of ucOCN were significantly increased. However, compared with the OVX group, serum ucOCN levels were significantly decreased in both the OVX + MICT group and the OVX + HIIT group. Therefore, exercise did not improve the effects of ovariectomy on serum E 2 levels but significantly decreased the levels of serum ucOCN. 2.3. Lipid Parameters, Blood Pressure, and Blood Vessel Morphology As shown in A,B, compared to Sham mice, OVX mice exhibited significantly elevated serum TG and significantly decreased HDL-C. Both types of exercise significantly reduced serum TG and increased HDL-C in OVX mice. There were no significant differences in LDL-C among the groups ( C). The changes in T-CHO were similar to HDL-C across the groups ( D). E exhibits the aortic intima smoothness and elastic fiber arrangement of Sham, OVX, OVX + MICT, and OVX + HIIT mice. Sham mice showed smooth aortic intima and dense elastic fibers. OVX mice had aortic protrusions, disordered fibers, and ruptures. The vascular elastic fibers of OVX + MICT mice are arranged neatly and without ruptures, while OVX + HIIT mice have loose fibers with reduced ruptures. Von Kossa staining ( F) revealed no calcification in any group. G shows OVX mice exhibited thicker aortic walls than Sham, while both exercise groups exhibited thinner walls. H indicates higher SBP in OVX mice, and the elevation were inhibited by both exercise types, while DBP showed no significant changes across all groups ( I). 2.4. Microstructure of the Distal Femur As shown in A, the 2D and 3D images of the distal femur revealed a significant reduction in the number of trabeculae in the cancellous bone of the distal femur region in OVX mice, accompanied by a substantial increase in trabecular spacing and deterioration of bone microarchitecture. Quantitative analysis results in B–E show that BMD, BV/TV, Tb.Th, and Tb.N were significantly decreased in the OVX group compared to Sham mice. Compared with OVX mice, BMD, BV/TV, Tb.Th and Tb.N were significantly increased in the OVX + MICT group. Furthermore, BMD, BV/TV and Tb.N were significantly increased in the OVX + HIIT group, with no significant change in Tb.Th. Additionally, BMD and BV/TV were significantly higher in the OVX + MICT group than in the OVX + HIIT group. As shown in F,G, compared with Sham mice, Tb.Sp and DA were significantly increased in the cancellous bone of the distal femur of OVX mice. Compared with OVX mice, Tb.Sp was significantly decreased in the OVX + MICT group, while there was no significant improvement in Tb.Sp in the OVX + HIIT group. Neither type of exercise exhibited a significant effect on DA. 
These results indicated that MICT on improving the microarchitecture of cancellous bone in mice was superior to HIIT, as evidenced by significantly increased BMD and BV/TV and significantly reduced Tb.Sp compared to HIIT. 2.5. The Number of Osteoblasts and Osteoclasts in the Tibia As shown in A,B, the OVX group exhibited significantly fewer osteoblasts per unit area. Both MICT and HIIT exercises increased osteoblasts. As shown in C,D, the number of osteoclasts per unit area significantly increased in OVX mice, while both exercises markedly reduced the number of osteoclasts. The wet uterine weights and body weights of each group are shown in B,C. Compared with the Sham group, the uterine weights of the OVX group, as well as the OVX + MICT and OVX + HIIT groups, were significantly reduced. There were no significant differences between the OVX group and the OVX + HIIT or OVX + MICT groups. Additionally, the body weights of the mice in the OVX group were significantly higher than those in the Sham group at the third week post-OVX, and both intensities of exercise significantly inhibited the weight gain in the OVX mice. 2 , cOCN, and ucOcn Level Serum E 2 , cOCN, and ucOCN levels were measured to assess the effects of varied exercise intensities on mice ( D–F). Compared with the Sham group, serum E 2 levels significantly decreased in the OVX, OVX + MICT, and OVX + HIIT groups, with no significant differences among them ( D). As shown in E,F, compared with the Sham group, the serum levels of cOCN in OVX mice were significantly reduced, while the concentrations of ucOCN were significantly increased. However, compared with the OVX group, serum ucOCN levels were significantly decreased in both the OVX + MICT group and the OVX + HIIT group. Therefore, exercise did not improve the effects of ovariectomy on serum E 2 levels but significantly decreased the levels of serum ucOCN. As shown in A,B, compared to Sham mice, OVX mice exhibited significantly elevated serum TG and significantly decreased HDL-C. Both types of exercise significantly reduced serum TG and increased HDL-C in OVX mice. There were no significant differences in LDL-C among the groups ( C). The changes in T-CHO were similar to HDL-C across the groups ( D). E exhibits the aortic intima smoothness and elastic fiber arrangement of Sham, OVX, OVX + MICT, and OVX + HIIT mice. Sham mice showed smooth aortic intima and dense elastic fibers. OVX mice had aortic protrusions, disordered fibers, and ruptures. The vascular elastic fibers of OVX + MICT mice are arranged neatly and without ruptures, while OVX + HIIT mice have loose fibers with reduced ruptures. Von Kossa staining ( F) revealed no calcification in any group. G shows OVX mice exhibited thicker aortic walls than Sham, while both exercise groups exhibited thinner walls. H indicates higher SBP in OVX mice, and the elevation were inhibited by both exercise types, while DBP showed no significant changes across all groups ( I). As shown in A, the 2D and 3D images of the distal femur revealed a significant reduction in the number of trabeculae in the cancellous bone of the distal femur region in OVX mice, accompanied by a substantial increase in trabecular spacing and deterioration of bone microarchitecture. Quantitative analysis results in B–E show that BMD, BV/TV, Tb.Th, and Tb.N were significantly decreased in the OVX group compared to Sham mice. Compared with OVX mice, BMD, BV/TV, Tb.Th and Tb.N were significantly increased in the OVX + MICT group. 
Furthermore, BMD, BV/TV and Tb.N were significantly increased in the OVX + HIIT group, with no significant change in Tb.Th. Additionally, BMD and BV/TV were significantly higher in the OVX + MICT group than in the OVX + HIIT group. As shown in F,G, compared with Sham mice, Tb.Sp and DA were significantly increased in the cancellous bone of the distal femur of OVX mice. Compared with OVX mice, Tb.Sp was significantly decreased in the OVX + MICT group, while there was no significant improvement in Tb.Sp in the OVX + HIIT group. Neither type of exercise exhibited a significant effect on DA. These results indicated that MICT on improving the microarchitecture of cancellous bone in mice was superior to HIIT, as evidenced by significantly increased BMD and BV/TV and significantly reduced Tb.Sp compared to HIIT. As shown in A,B, the OVX group exhibited significantly fewer osteoblasts per unit area. Both MICT and HIIT exercises increased osteoblasts. As shown in C,D, the number of osteoclasts per unit area significantly increased in OVX mice, while both exercises markedly reduced the number of osteoclasts. Menopause is an inevitable life stage for women, during which the aging of ovaries leads to a decrease in estrogen levels, thereby increasing the risk of various diseases . Osteoporosis and cardiovascular disease are common among middle-aged and elderly women, with cardiovascular disease being the primary cause of death among older women . Multiple studies have shown that the risk of cardiovascular disease increases after the onset of menopause . Abnormal blood pressure and lipid parameters are risk factors for cardiovascular disease (CVD) . Studies have shown that reducing systolic and diastolic blood pressure can decrease the risk of CVD . Both epidemiological and experimental studies indicate that decreased HDL-C and increased LDL-C increase the risk of CVD . Clinical studies demonstrated that actively improving abnormal blood lipid levels can regress atherosclerotic plaques and reduce the incidence of CVD . The decline in estrogen levels in postmenopausal women and ovariectomized animals leads to elevated blood pressure and abnormal lipid parameters . In our study, OVX mice exhibited significantly increased systolic blood pressure, decreased HDL-C levels, and increased TG levels. Additionally, these mice showed increased elastic fiber rupture and elevated aortic wall thickness. Both exercise intensities effectively lowered blood pressure, improved the morphological structure of the aortic wall, and reduced wall thickness. These changes might be related to the regulation of lipid parameters by exercise. Moreover, both exercise intensities significantly reduced serum TG levels and increased HDL-C levels. Notably, T-CHO levels decreased in OVX mice and increased in the exercise groups. This might be related to the lack of significant changes in LDL-C levels among the groups and the increase in HDL-C levels due to exercise. The specific mechanisms require further investigation. In summary, the four key factors significantly reducing the risk of cardiovascular diseases are attributed to exercise, including the decrease in TG levels, the increase in HDL content, the reduction in blood vessel wall thickness, and the effective control of SBP. 
These positive physiological changes not only effectively decrease the deposition of lipids on the inner wall of blood vessels but also enhance the cholesterol reverse transport mechanism, thereby significantly improving blood vessel elasticity and compliance, optimizing the circulatory system, and greatly reducing the burden and potential damage to the heart and blood vessels . This series of comprehensive effects constructs a solid defense against the occurrence and development of atherosclerosis, thereby substantially lowering the incidence of cardiovascular diseases . It is well-documented that weight gain is a common occurrence in OVX mice, primarily due to metabolic alterations resulting from decreased estrogen levels . In alignment with previous studies, our results showed that the body weights of OVX mice were significantly higher than those of the Sham group . This weight gain can influence cardiovascular risk factors through various mechanisms, such as increased release of inflammatory cytokines from adipose tissue and negative impacts on insulin sensitivity . Interestingly, both MICT and HIIT significantly curbed the weight gain in OVX mice. This observation suggests that exercise may counteract weight gain by improving energy balance and metabolic regulation . Specifically, exercise could achieve this by increasing energy expenditure, boosting basal metabolic rate, and enhancing insulin sensitivity . In our previous research, exercise significantly elevated the serum osteocalcin levels in VCD-induced ovarian senescent mice and ameliorated their anxiety-like behaviors . Also, studies reported that OCN is involved in the regulation of glucose and lipid metabolism and is associated with vascular atherosclerosis and vascular calcification . Therefore, in this study, we measured the circulating OCN levels in mice at the 9th week after OVX and found that the levels of ucOCN were significantly elevated, while the levels of cOCN were significantly decreased. Both types of exercise significantly reduced the ucOCN levels in OVX mice but had no significant effect on cOCN levels. The significant negative relationship between OCN and estrogen indicated that changes in OCN were induced by estrogen. In fact, we observed a trend of increased estrogen levels in mice from the exercise groups. According to research reports, Wistar female rats that underwent ovariectomy exhibited a significant increase in serum estrogen levels after engaging in exercise for an extended period of time (1 h/d, 6 d/w) for a duration of three months . Meanwhile, we did not detect a correlation between OCN and TG, HDL-C, SBP, or vascular wall thickness, indicating that the reduction in cardiovascular risk factors by both exercises in OVX mice is not directly related to changes in serum ucOC levels. Similar to our results, Wieczorek-Baranowska and his colleagues reported that 8 weeks of aerobic training in postmenopausal women significantly improved central obesity, decreased OCN levels, and reduced insulin resistance, but they did not observe a direct relationship between OCN concentration changes with training and metabolic markers . Another study has demonstrated that there are significant gender differences in the impact of exercise on the regulation of OCN . In this study, exercise increased the level of circulating OCN in female mice but decreased it in male mice. Notably, this change was associated with improvements in cognitive outcomes yet had no correlation with metabolic outcomes. 
Meanwhile, although exogenous osteocalcin did not improve metabolism, it had a significant effect on improving cognitive defects induced by a high-fat diet. Furthermore, some studies have reported that OCN-knockout mice did not exhibit significant insulin resistance or glucose and lipid metabolism disorders . In contrast, another study involving 39 young obese male participants randomly divided them into a control group and an exercise group, with the exercise group undergoing an 8-week aerobic exercise training program . The results showed that exercise-induced reduction in body fat and improvement in insulin sensitivity were accompanied by a significant increase in serum osteocalcin levels. Moreover, the increase in osteocalcin was negatively correlated with changes in body weight, BMI, body fat percentage, and insulin resistance index . Therefore, it can be concluded that the impact of exercise on osteocalcin and its role in metabolic regulation may vary due to age and gender differences. Additionally, different animal models and exercise interventions of varying intensities can produce different results. Thus, more systematic and comprehensive studies, including gain-of-function and knockout experiments, are needed in the future to further verify the role of osteocalcin in the regulation of energy metabolism and cardiovascular risk factors by exercise. The regulatory effect of exercise on estrogen levels may be an important mechanism by which it improves cardiovascular risk factors in OVX mice. Meanwhile, ucOCN does not seem to be the direct cause of the improvement in cardiovascular risk factors due to exercise; rather, it may serve as a potential biomarker for assessing the effectiveness of exercise in the prevention and treatment of osteoporosis. The microstructure of bone tissue can effectively reflect the health status of bones, and BV/TV, Tb.Th, Tb.Sp, and Tb.N are the primary indicators reflecting bone microstructure . In this study, significant decreases were observed for BMD, BV/TV, Tb.N, and Tb.Th in the distal cancellous bone region of the femur in OVX mice. Additionally, Tb.Sp and DA have increased significantly. MICT was found to improve the bone microstructure more effectively than HIIT because MICT well improved BMD, BV/TV, and Tb.Sp in the distal cancellous bone of the femur in OVX mice. Our results are consistent with previous research . Furthermore, low-to-moderate-intensity treadmill exercise at a speed of 10 m/min can partially reverse the trabecular bone loss induced by ovariectomy. Although high-intensity treadmill exercise at 18 m/min also exhibits certain positive effects, running at the lower intensity is more effective in reducing bone loss . It further supports our results that moderate-intensity exercise is more beneficial for improving bone health. This may be related to the fact that MICT is more effective in promoting osteoblast number (+10.22%) compared to HIIT. Additionally, MICT, with its moderate-intensity continuous exercise, may apply a more sustained and appropriate load stimulus to the bones. In contrast, the appropriate stimulus time that HIIT exerts on the bones may be shorter. This is also one of the reasons why MICT is superior to HIIT in improving bone microstructure. However, the specific mechanism of action still needs further in-depth research. 
Morphological analysis of tibial trabecular structure reveals that both MICT and HIIT can significantly increase the number of osteoblasts (+42.07% and +29.41%) and decrease the number of osteoclasts (−50.83% and −57.91%). Osteoblasts are key cells responsible for bone formation and bone matrix synthesis. In this study, MICT and HIIT effectively promoted osteoblast proliferation, accelerating bone formation and repair processes. Notably, MICT exhibited particularly prominent effects in this regard, which partly explains why MICT outperforms HIIT in improving bone microstructure. Osteoclasts are responsible for bone resorption and remodeling. Both MICT and HIIT significantly reduced the number of osteoclasts, indicating that these two exercise modes can inhibit the bone resorption process and reduce bone loss. Research indicates that cOCN, due to its structural characteristic of having two carboxyl groups at the termini, exhibits a strong binding ability to Ca 2+ on the surface of hydroxyapatite, enabling it to effectively deposit in bone matrix . Approximately 60% to 90% of cOCN is deposited in bone matrix, while the remaining portion is released into the circulatory system . In contrast, ucOCN has a weaker affinity for bone and is more abundant in the bloodstream . Notably, osteoclast activity creates acidic resorption lacunae where OCN deposited in the bone matrix may undergo decarboxylation, thereby increasing ucOCN levels in the blood . Therefore, the reduction in osteoclast numbers due to exercise may contribute to lowering circulating ucOCN levels by minimizing this decarboxylation process. Despite the promotion of osteoblast proliferation by both MICT and HIIT, circulating cOCN levels do not increase, implying enhanced deposition of cOCN within the bones. Further research is needed to explore the role of OCN carboxylation in bone health and its potential implications for the prevention and treatment of bone diseases. 4.1. Animals Twenty-four female C57BL/6J mice aged 7–8 weeks (purchased from the Animal Center of the Medical School of Xi’an Jiaotong University, SCXK2012-003) were housed in a sterile animal room at the School of Life Sciences and Technology, Xi’an Jiaotong University. One week after acclimatization, the experiments commenced. The mice were randomly divided into four groups: Sham group (Sham, n = 8), ovariectomized control group (OVX, n = 8), ovariectomized + moderate-intensity continuous training group (OVX + MICT, n = 8), and the ovariectomized + high-intensity interval training group (OVX + HIIT, n = 8). During the experimental period, the mice had free access to standard rodent feed and sterile water. The relative temperature and humidity in the animal room were maintained at 22 °C ± 2 °C and 60% ± 5%, respectively. The diurnal cycle was controlled at 12 h of light and 12 h of darkness. The entire procedure was reviewed and approved by the Biomedical Ethics Committee of the Medical School of Xi’an Jiaotong University in accordance with ethical principles, with an approval number of 2020-625. It was carried out in compliance with the “Guide for the Care and Use of Laboratory Animals” published by the National Institutes of Health (NIH Publication No. 8023, revised in 1978). 4.2. Ovariectomy For the OVX mice, a bilateral dorsal incision surgery was performed. A vertical incision was made on each side of the midline of the back, approximately one finger’s width away from the midline, at the point between the iliac bone and the ribs. 
The skin and subcutaneous fascia were cut open with scissors, and the abdominal muscles were cut along the edge of the erector spinae muscles. The abdominal cavity was opened with forceps supporting the fat pad, which was carefully lifted out. Upon locating the ovaries, the blood vessels and fat below them, as well as the uterus, were suture-ligated, and the ovaries were removed. The muscles and skin were sutured layer by layer, and the wound was disinfected with iodophor. For the Sham group mice, the ovaries were retained, and only an equivalent volume of fat next to the ovaries was removed. The wounds of the mice recovered well after the surgery, and no infections occurred. 4.3. Exercise Protocols The two exercise groups of mice underwent an adaptive training period of 6 min per day at a speed of 6 m/min for three consecutive days. Following the adaptive training, the maximum running capacity (MRC) of the mice was measured. The measurement method involved starting at an initial speed of 6 m/min, with an increase of 3 m/min every 3 min, until the mice could no longer keep up with the treadmill speed , with a maximum detectable speed of 24 m/min. The exercise protocol for the OVX + MICT group was as follows: continuous exercise at 70% of the MRC (17 m/min) for 40 min/d , five days per week, for a duration of 8 weeks. The exercise protocol for the OVX + HIIT group was as follows: intermittent exercise consisting of 90% and 50% of the MRC . Specifically, each day began with 10 min of exercise at 17 m/min, followed by five cycles of 3 min at 21 m/min interspersed with 3 min at 12 m/min, five days per week, for a duration of 8 weeks. All exercise sessions were completed between 6:00 p.m. and 9:00 p.m.
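As a point of comparison between the two regimens, the following minimal Python sketch simply encodes the speeds and durations stated in Section 4.3 and computes the running distance they imply per session; it is an illustration only and not code used in the study.

```python
# Minimal sketch (illustration, not study code): per-session structure of the two
# treadmill protocols described in Section 4.3 and the total distance they imply.
def session_distance(blocks):
    """blocks: list of (minutes, speed in m/min); returns total distance in metres."""
    return sum(minutes * speed for minutes, speed in blocks)

# MICT: 40 min of continuous running at 17 m/min (~70% of the 24 m/min MRC)
mict = [(40, 17)]

# HIIT: 10 min at 17 m/min, then five cycles of 3 min at 21 m/min + 3 min at 12 m/min
hiit = [(10, 17)] + [(3, 21), (3, 12)] * 5

print(session_distance(mict))  # 680 m over 40 min
print(session_distance(hiit))  # 665 m over 40 min
```

Under the stated speeds, the two protocols are roughly matched for session duration and total distance, so they differ mainly in how the intensity is distributed within the session.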
4.4. Blood Pressure Measurement Tail blood pressure was measured using a small-animal blood pressure monitor (BP-2010A, Reward, Beijing, China). The mouse was restrained using a fixing device to expose its tail, which was then placed in a heated chamber at 37 °C to allow the mouse to acclimate to the environment. The pressure cuff of the blood pressure measuring device was placed near the base of the mouse’s tail. Blood pressure measurement began once the mouse had calmed down. Each animal was measured 20 times, and the average of the middle ten readings was taken as the mouse’s blood pressure value. 4.5. Serum Analysis Strictly adhering to the instructions provided by the commercial ELISA kits, we assessed the levels of estradiol (E2, Cloud-Clone Corp, Wuhan, China), Gla-osteocalcin (cOCN, Takara, Tokyo, Japan), Glu-osteocalcin (ucOCN, Takara, Tokyo, Japan), triglyceride (TG, Nanjing Jiancheng, Nanjing, China), high-density lipoprotein cholesterol (HDL-C, Nanjing Jiancheng, Nanjing, China), low-density lipoprotein cholesterol (LDL-C, Nanjing Jiancheng, Nanjing, China), and total cholesterol (T-CHO, Nanjing Jiancheng, Nanjing, China). A Bio-Rad Model 680 microplate reader (Bio-Rad, Hercules, CA, USA) was utilized to measure the absorbance values. The detection limits were 12.35 pg/mL for E2, 10.5 ng/mL for cOCN, and 0.25 ng/mL for ucOCN. 4.6. Morphometric Analysis The aorta and tibia were dissected and fixed with 4% formaldehyde. The tibia was then decalcified, sectioned, and prepared for embedding in wax after washing with PBS. The aorta underwent dehydration and wax embedding. Both were sectioned at 8–10 μm, mounted on slides, and stained with hematoxylin and eosin. For Von Kossa staining, sections were washed, immersed in silver solution, exposed to light, treated with sodium thiosulfate, counterstained with Van Gieson’s stain, and then processed for dehydration, clearing, and mounting. Histophysiological evaluations were conducted under a microscope. Aortic structure and thickness were measured at 200× magnification, while osteoblast and osteoclast counts per trabecular unit area were calculated at 400× magnification. 4.7. Micro-CT Analysis The microarchitecture of the trabecular bone region in the distal femur of mice was scanned using the German Y. Cheetah micrometer X-ray three-dimensional imaging system (Y.Cheetah; YXLON International GmbH, Hamburg, Germany). The parameters were set as follows: voltage at 80 kV, current at 35 μA, resolution at 6 μm, and a total of 720 scanning layers. After the scanning was completed, the grayscale images obtained from X-ray imaging were reconstructed. Using VG Studio MAX 3.0 analysis software, the region of interest (ROI) was selected. Based on the anatomical features of the femur, the first layer where both the medial and lateral condyles of the distal femur simultaneously disappeared was identified, and a cylinder with a radius of 1.5 cm and a height of 1 cm was selected as the ROI, extending from bottom to top. Trabecular bone within this region was extracted for analysis. This process yielded visual 2D and 3D images and quantitative indices of the trabecular microarchitecture of the distal femur, including bone mineral density (BMD), bone volume fraction (BV/TV), trabecular thickness (Tb.Th), trabecular number (Tb.N), trabecular separation (Tb.Sp), and degree of anisotropy (DA). 4.8. Statistical Analysis The results are presented as mean ± standard deviation (mean ± SD). Statistical analyses were conducted using SPSS version 20.0 software (SPSS Institute, Chicago, IL, USA). All data were tested using the one-sample Kolmogorov–Smirnov test and were found to be normally distributed. One-way analysis of variance (ANOVA) was employed to assess whether there were differences among the groups. Once a significant difference was detected, the least significant difference (LSD) multiple comparison test was used to determine whether the differences between each pair of groups were statistically significant. A p-value < 0.05 was considered statistically significant.
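To make the workflow in Section 4.8 concrete, the sketch below reproduces the same sequence of tests with SciPy on hypothetical data; the group values are placeholders, the unadjusted pairwise t-tests are only an approximation of the LSD procedure, and the study itself used SPSS.

```python
# Minimal sketch (not the authors' SPSS analysis): normality check, one-way ANOVA,
# and unadjusted pairwise comparisons on hypothetical placeholder data.
from itertools import combinations
from scipy import stats

groups = {
    "Sham":     [5.1, 4.8, 5.3, 5.0, 4.9, 5.2, 5.1, 4.7],
    "OVX":      [3.9, 4.1, 3.8, 4.0, 4.2, 3.7, 3.9, 4.0],
    "OVX+MICT": [4.8, 4.6, 4.9, 4.7, 4.5, 4.8, 4.6, 4.7],
    "OVX+HIIT": [4.4, 4.3, 4.5, 4.2, 4.6, 4.4, 4.3, 4.5],
}

# 1) One-sample Kolmogorov-Smirnov test of each group against a normal distribution
for name, values in groups.items():
    ks_stat, ks_p = stats.kstest(stats.zscore(values), "norm")
    print(f"{name}: KS p = {ks_p:.3f}")

# 2) One-way ANOVA across all groups
f_stat, anova_p = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {anova_p:.2g}")

# 3) If the ANOVA is significant, unadjusted pairwise t-tests
#    (an approximation of Fisher's least significant difference procedure)
if anova_p < 0.05:
    for (name1, vals1), (name2, vals2) in combinations(groups.items(), 2):
        t_stat, p = stats.ttest_ind(vals1, vals2)
        print(f"{name1} vs {name2}: p = {p:.2g}")
```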
In summary, both MICT and HIIT can effectively improve cardiovascular disease-related risk factors in OVX mice, but moderate-intensity treadmill exercise is more effective at enhancing bone mineral density and improving bone microstructure. This suggests that, for postmenopausal women, opting for MICT may be more beneficial to both the cardiovascular and the skeletal system. UcOCN could serve as a metabolic biomarker of exercise-induced improvements in bone health, but it was not found to participate in the regulation of cardiovascular disease-related risk factors, at least in the current study.
Freiburg Neuropathology Case Conference
An 89-year-old patient was admitted through our Accident and Emergency department after a domestic fall. Upon neurological examination, the patient appeared somnolent and had dysarthric speech. A cranial computed tomography (CT, Fig. a), as well as subsequent magnetic resonance imaging (MRI, Figs. and ) of the head, revealed a right cerebellar mass. A cranial CT performed in relation to a domestic fall 3.5 years earlier had already shown a small hypodense lesion in the same location (Fig. b). Due to the increase in size and the increasing mass effect of the lesion, with compromised cerebrospinal fluid (CSF) outflow, surgery was recommended. The operation was performed with the patient under general anesthesia and in a prone position. After suboccipital craniotomy, access to the tumor was gained. The tumor was found to be hard and very bloody and was removed circumferentially. Despite its proximity to the tentorium, the tumor was located strictly intra-axially. The patient was extubated on the first postoperative day without any new focal neurological deficit; however, mobilisation was difficult and the patient was only discharged from the intensive care unit on the seventh postoperative day. The patient unexpectedly succumbed 3 days later, most likely due to a pulmonary embolism. The cranial CT upon admission (Fig. a) revealed a well-circumscribed right cerebellar mass. In retrospect, the lesion had already been apparent on a previous cranial CT performed 3.5 years earlier. At that time the lesion appeared to be much smaller (Fig. b). On T2-weighted images from the current MRI (Fig. a), the lesion had a multicystic lobulated matrix and presented with a space-occupying effect and surrounding hyperintense signal alterations on fluid-attenuated inversion recovery (FLAIR) images (not shown) extending to the contralateral side. The local mass effect included a displacement of the fourth ventricle and consecutive signs of an obstructive hydrocephalus, with enlargement of the lateral ventricles and the third ventricle and periventricular oozing (not shown). On native T1-weighted images (Fig. b) the lesion was hypointense. On T1-weighted images after administration of gadolinium, the walls of the cystic components as well as the nodular parts of the lesion showed homogeneous and intense contrast enhancement (Fig. c). The lesion had broad-based contact with the inconspicuously configured tentorium cerebelli (Fig. d). The nodular parts of the mass showed signs of high perfusion and hypervascularisation in the MRI perfusion relative cerebral blood volume (rCBV) map compared to normal brain tissue (Fig. a). On diffusion-weighted images (b-value = 1000, Fig. b), the lesion did not show any signs of restricted diffusion. Hemangioblastoma Hemangioblastomas are benign (WHO grade I), slow growing, vascular and relatively rare (7%) neoplasms of the posterior fossa. Second to metastases, they are the most common posterior fossa tumor in adults . Even though large case series have reported an age peak for hemangioblastomas between 30 and 65 years, the prevalence of hemangioblastomas in patients > 65 years old has been reported to be up to 13.6% . Hemangioblastomas occur in both sporadic and multiple forms, whereas the multiple form is associated with von Hippel-Lindau (VHL) disease .
In VHL, hemangioblastomas are usually located in the posterior fossa (60–76%), and four different morphological types of hemangioblastomas have been described: solid (48%), cystic (26%), cystic with mural nodules (21%), and both cystic and solid (5%) . The clinical presentation largely depends on the degree of mass effect, with a long history (6–10 months) of minor symptoms followed by a sudden exacerbation due to high intracranial pressure (50% of presentations), most often related to cerebrospinal fluid (CSF) obstruction . Radiological features include a well-circumscribed hypointense to isointense T1-weighted and hyperintense T2-weighted mass, with intense contrast enhancement of the nodular parts as well as vascular flow voids in the surrounding tissue. The cysts contain fluid slightly hyperintense to CSF on T1-weighted images, without a contrast-enhancing wall [ , pp. 606–609]. In the present case, we considered hemangioblastoma to be a valid differential diagnosis. The radiological features of the cerebellar mass matched many of the described patterns, especially the hypervascularisation and the cystic components of the lesion. Furthermore, the slow size progression over the past 3.5 years was in line with the diagnosis of a nonmalignant tumor. Solitary Fibrous Tumor of the Dura Since the 5th edition of the WHO classification of CNS tumors from 2021, the solitary fibrous tumor of the dura replaces the tumor entity previously referred to as hemangiopericytoma . With only about 0.4% of all CNS tumors, it is very rare and mainly occurs around the age of 40 years (20–65 years). Depending on the subtype and the tumor size, the symptoms can vary; most commonly headaches, followed by seizures, visual dysfunction and motor weakness, have been reported . Solitary fibrous tumors of the dura are thought to originate from mesenchymal spindle cells and are located mostly along the occipital dura, originating from the falx or tentorium cerebelli with or without a dural tail sign. They often show signs of hypervascularisation, including prominent flow voids [ , p. 605]. In a case series by Zhou et al., 39 patients with anaplastic hemangiopericytomas (formerly WHO grade III) were analysed with regard to their MRI appearance. The imaging findings included lobulations/cross-leaf growth, necrosis and cystic changes, a rare dural tail sign, hemorrhages, more pronounced oedema, damage to the adjacent skull, as well as extracranial metastases . In our patient, some of the imaging features of solitary fibrous tumors of the dura, more precisely of the former subtype anaplastic hemangiopericytoma, were present. Nevertheless, the lesion showed no sign of osseous infiltration or metastases and had a slow growth progression. This differential diagnosis had to be considered, yet it seemed less likely due to the lack of dural involvement and the advanced age of the patient. Meningioma Meningiomas account for about 20% of all intracranial tumors, with a peak incidence at 45–55 years of age . They are most commonly located supratentorially but are also found in approximately 9–15% of patients in the posterior fossa . Usually, they present as an extra-axial, well-circumscribed, contrast-enhancing (in about 90%) mass with broad-based dural attachment and a CSF cleft. On nonenhanced cranial CT, they are mostly hyperdense (70%) to isodense (30%), and 25% show homogeneous, sand-like or sprinkled calcifications. Additionally, hyperostotic or permeative sclerotic bone changes are possible.
On non-enhanced MRI, they have an isointense to minimally hyperintense signal on T1w and a variable signal on T2w images and are often surrounded by perifocal oedema (60%). Within highly vascular meningiomas, T2 flow voids are seen. The typical dural tail sign can be found but is not specific. Furthermore, cysts (2–4%), necrosis and hemorrhage are also possible but uncommon features . Atypical (WHO grade II) and anaplastic/malignant (WHO grade III) meningiomas tend to be more aggressive and account for about 10% of meningiomas. Unlike WHO grade I meningiomas, they are indistinct from or infiltrate the brain parenchyma. Differentiation of the meningioma types by imaging morphology alone is hardly possible, but a large perifocal oedema and a low apparent diffusion coefficient (ADC) indicate high-grade variants . In our patient, the mass showed a small contact zone with the tentorium, but neither a dural tail nor osseous involvement was present. Furthermore, the lesion seemed to be located intra-axially, which, together with the perifocal oedema, would point in the direction of an atypical or anaplastic meningioma (WHO grade II/III). Considering the clinical course and the morphology of the underlying lesion, the diagnosis of a meningioma seemed possible but not very likely. Brain Metastases Brain metastases occur in 20–40% of patients with systemic tumor diseases, with a peak prevalence at 65 years and older. They are less commonly located in the cerebellum (15%) and the brainstem but still represent the most common malignancy of the posterior fossa (around 75%) in adults . Most commonly, the primary malignancies in infratentorial metastases are lung and breast cancers. In melanomas, posterior fossa metastases are very rare . Brain metastases can be asymptomatic or lead to a variety of neurological symptoms, especially seizures and symptoms induced by local mass effect . The imaging features of brain metastases are quite variable. Most commonly, they present as oval nodular or ring-enhancing lesions with perilesional oedema. They can also contain central necrotic/cystic portions [ , p. 755]. In perfusion imaging, they often present with elevated rCBV compared to normal brain tissue. Even though several entities such as renal cell carcinoma or melanoma show hypervascular metastases, pronounced flow voids are uncommon in brain metastases . The slow growth progression of the cerebellar lesion over the course of 3.5 years made this diagnosis highly unlikely in our patient.
In the hematoxylin and eosin (H&E) stained sections of the formaldehyde-fixed and paraffin-embedded biopsy material, fragments of a highly vascular tumor were found (Fig. ). The vascular cells within the tumor are more abundant than the neoplastic stromal cells. The immunohistochemical reaction against CD34 marks the endothelial layer of these vessels but not the stromal cells (Fig. ). Most vessels are small in diameter and are best termed “capillary”. In addition, larger vessels also appear within the tumor. The stromal tumor cells lying between the capillaries often exhibit a roundish nucleus of moderate chromatin density. Only a few tumor cells show a hyperchromatic, atypical nucleus. Mitotic figures are scarce within the tumor cells, in line with a low proliferative activity of less than 1%, as shown in the immunohistochemical staining against Ki-67 (Fig. ). Moreover, many tumor cells exhibit medium to large cell bodies with multiple vacuoles (Fig. ). A smaller portion of the tumor cells has a slightly epithelioid appearance and grows in a more solid pattern. Many small, fresh hemorrhages can be observed multifocally. Hemosiderin deposits, as a sign of older bleeding, are not seen. Most tumor cells show a strong signal in the immunohistochemical reaction for inhibin alpha (Fig. ).
Immunohistochemical reactions against the epithelial membrane antigen (EMA, Fig. a) and STAT6 remain negative (Fig. b). Furthermore, gliotically altered cerebellar brain tissue is found in the border regions of the biopsy. This brain tissue appears sharply demarcated from the adjacent tumor tissue. In summary, the histopathological finding of a tumor with two major components, namely partially vacuolated neoplastic stromal cells and abundant vascular cellularity, leads to the diagnosis of a hemangioblastoma, CNS WHO grade I. Hemangioblastoma (WHO Grade I) Hemangioblastomas are rare benign neoplasms that account for less than 2% of all intracranial tumors. They typically occur in the cerebellum (up to 76%; as in the outlined case) and are less frequent in the brainstem or along the spinal cord . Most hemangioblastoma cases occur sporadically or, less commonly, in association with VHL syndrome ; however, about 70–80% of VHL patients exhibit hemangioblastomas of the central nervous system (CNS). VHL-associated hemangioblastomas tend to appear at a younger age than sporadic forms (30–40 years vs. 50–70 years), but both are primarily seen in adults . In the described case, VHL was not known, and the patient’s age of 89 years is above the age-related peak incidence but still not uncommon for sporadic cases . The differential diagnosis for highly vascular tumors with a low proliferation rate includes a solitary fibrous tumor (SFT; formerly known as hemangiopericytoma), CNS WHO grade 1, as well as an angiomatous meningioma, CNS WHO grade I . SFTs are also rare tumors within the CNS, making up less than 1% of all CNS tumors. They are usually found supratentorially, superficially, and closely related to the meninges ; however, rare cases with a cerebellopontine localization of SFTs have been reported . Peak incidence occurs between 50 and 70 years . On the histopathological level, SFTs are characterized by prominent, branching, and staghorn-shaped blood vessels and randomly arranged spindled-ovoid monomorphic cells between these vessels. Molecularly, SFTs show a genomic inversion at the 12q13 locus leading to a NAB2:STAT6 fusion. This fusion results in strong nuclear expression of STAT6, which can be detected immunohistochemically and is regarded as a hallmark of SFTs . In the case described above, the absence of nuclear STAT6 expression and the cerebellar localization of the tumor make the diagnosis of an SFT unlikely. Angiomatous meningiomas are a meningioma variant graded as CNS WHO grade 1. Meningiomas per se are a frequent brain tumor entity (37.6% of all CNS tumors) occurring mostly in older patients, and the risk increases with age . In the angiomatous variant, the numerous blood vessels often represent a greater proportion of the tumor than the meningioma cells themselves . The blood vessels are often thick-walled and hyalinized. The tumor cells exhibit a positive signal in the immunohistochemical reaction for EMA. EMA immunohistochemistry can help identify the occasionally sparse tumor cells between the blood vessels and exclude differential diagnoses, such as hemangioblastoma, where tumor cells are negative for EMA. In the current case, clinicians also considered the possibility of a brain metastasis. Indeed, histologically, hemangioblastomas can appear hypercellular, and the vacuolation of the tumor cells can mimic a clear cell component resembling metastatic renal clear cell carcinoma .
In such cases, immunohistochemistry for renal clear cell carcinoma markers such as renal cell carcinoma marker (RCCm) or CD10 may help differentiate these entities. In addition, the fact that patients with VHL syndrome tend to develop renal cell carcinomas can make it necessary to exclude a cerebral metastasis with the mentioned immunohistochemistry .
Psychiatrist-led hepatitis C (HCV) treatment at an opioid agonist treatment clinic in Stockholm– a model to enhance the HCV continuum of care | ac3b10c9-9c02-40be-aa9d-69e9d7adf200 | 11948708 | Psychiatry[mh] | An estimated 50 million people worldwide are infected with hepatitis C virus (HCV) . People who inject drugs (PWID) and people with opioid agonist therapy (OAT) are often overlapping populations with an increased prevalence of hepatitis C virus (HCV) infections. Recent studies provide strong evidence regarding the effectiveness of HCV treatment with direct-acting antivirals (DAAs) and low levels of reinfections among these populations [ – ]. Increased access to HCV care for people with OAT is essential to reach the WHO goal of eliminating HCV as a major public health threat by 2030. The 2016 WHO HCV elimination targets include an 80% reduction in new cases and a 65% reduction in HCV-related deaths . In 2021, new targets were introduced and defined as an absolute annual HCV incidence of < 5/100,000 in all persons and < 2/100,000 in PWID . Although access to OAT has increased in Sweden over the past decade, national coverage remains low compared to other European Union countries, with only 62 OAT recipients per 100,000 inhabitants, compared to the EU average of 100 per 100,000 . OAT reduces the risk of HCV among PWID, and its effectiveness is further enhanced when combined with needle and syringe programs (NSP) . In a systematic review from 2023, OAT coverage among PWID in Sweden was reported in the higher interval of > 40 OAT recipients per 100 PWID, while NSP coverage was at a moderate level of 100 to 200 needle and syringes distributed per PWID annually . Since 2018, HCV treatment in Sweden has been universal and fully reimbursed. However, DAAs must be prescribed by, or in consultation with, a physician at an infectious diseases (ID) or gastroenterology clinic with experience in treating patients with HCV . Furthermore, HCV treatments must be registered in the national quality register InfCare Hepatitis to ensure national follow-up and quality assurance of care. Even with universal access to HCV treatment, patients still need to be linked to care. As numerous studies have shown, there are multiple factors that might negatively affect the ‘HCV care cascade’ or the ‘continuum of care’ defined as retention in every step from diagnosis to reaching cure [ – ]. Over time, from screening of anti-HCV, through confirmatory HCV RNA, linkage to a specialist assessment, follow-up visit for fibrosis assessment and treatment start, a great proportion of patients might be lost to follow-up (LTFU) . Barriers to the HCV care cascade might include invasive testing methods, limited access to pangenotypic treatments, and logistical challenges like travel costs and clinic distances. Systemic factors such as inadequate funding, stigma, and weak governance can further hinder engagement, while personal factors like low motivation might be addressed through peer-support programs . By addressing these factors and treating people with HCV geographically closer to where they already access services, aiming for a ‘one-stop-shop’, the continuum of care could be improved . Hence, HCV treatment should be offered in settings such as substance use clinics, OAT clinics, prisons and at needle and syringe programs (NSP). Before the introduction of DAA, overall lifetime HCV treatment uptake among OAT participants in Sweden was low, with only 1–6% treated [ – ]. 
An observational study with data from the Swedish Prescribed Drug Registers noted an estimated cumulative DAA treatment uptake of 28% among OAT participants between 2014 and 2017 . However, these estimates represent a time when restrictions regarding the level of fibrosis were still present in the Swedish HCV treatment guidelines. In 2019, there were an estimated 29 700 viremic HCV infections in Sweden, and an HCV transmission modelling study from 2021 concluded that Sweden would achieve and exceed the WHO targets for diagnosis, treatment and liver-related death by 2030. However, fully achieving all WHO targets, including a substantially reduced incidence, would require expanding harm reduction programs (including OAT) to engage and treat more than 90% of PWID with HCV in these programs . Since the introduction of DAAs, several innovative models of HCV care targeting OAT participants and PWID have emerged. These include decentralized mobile clinics with point-of-care testing and treatment, peer-led test-and-treat programs, and community pop-up clinics to simplify access to services, along with the integration of HCV treatment directly into OAT settings [ – ]. However, different countries, regions and settings have different challenges related to local guidelines and resources. The Swedish national HCV elimination plan was published in July 2022. The elimination plan highlights that “a close collaboration with substance use disorder clinics is important” and that “treatment of HCV, in collaboration with an infectious disease clinic, should be offered directly at OAT clinics with diagnostics, investigation, treatment and follow-up on-site” . In this study, we aim to describe in detail how the first psychiatrist-led HCV treatment model of care was introduced at an OAT clinic in Stockholm, Sweden. Furthermore, we aim to evaluate the outcome of this model of care by assessing HCV treatment success, defined as sustained virological response (SVR) with a negative HCV RNA test 12 weeks after end of treatment (EOT), as well as the rate of reinfection during follow-up.
Setting and patients In Sweden, OAT is managed only by specialists in psychiatry, with or without a sub-specialization in addiction medicine (a sub-specialty of psychiatry). The Maria OAT clinic is located in central Stockholm and provides OAT (methadone, buprenorphine and buprenorphine/naloxone) for approximately 500 patients. Of these, 36% dose daily at the clinic, 36% have take-home doses from the clinic and 28% have fully prescribed OAT through the pharmacy, corresponding to the differentiated treatment groups defined below. The majority (82%) have a history of injecting drug use (IDU), and the primary injected opioid in Stockholm is heroin. Previous studies have noted a 69% viremic HCV prevalence among OAT patients in Sweden and that HCV has been poorly diagnosed and followed up at OAT clinics . An unpublished report from the Maria OAT clinic noted that among all OAT patients in 2018 ( n = 418), almost half (46%) were not tested for HCV or had an unknown HCV status in the digital medical chart. Of those tested ( n = 225), 64% had a chronic infection, 25% had a cleared infection (spontaneously or treatment induced) and 11% had never been exposed to HCV (personal communication Tobias Nordin, Maria OAT clinic). Differentiation of OAT patients The Maria OAT clinic offers a specialized OAT treatment approach, categorizing patients into three groups based on their treatment needs, as outlined below:
High treatment needs (HTN) – patients with clinical complexity and concomitant high-risk drug use, including IDU. The treatment focus has a pronounced harm-reduction approach, with OAT medication dispensed under daily monitoring.
Moderate treatment needs (MTN) – patients with some degree of clinical complexity but with treatment needs adequately addressed and less frequent drug-related high-risk presentations. Treatment is focused on rehabilitation, with regular monitoring but increased treatment autonomy.
Low treatment needs (LTN) – patients with minimal high-risk or problematic substance use and with stable medical, psychiatric and psychosocial conditions. Patients receive OAT medication through prescription and manage the OAT treatment on their own without supervised intake.
As part of the HCV model of care introduced in this study, all OAT patients at the Maria OAT clinic could access HCV testing and treatment on-site. However, most patients in the low treatment needs group and many patients in the moderate treatment needs group would also access HCV treatment through standard-of-care referrals to regional ID clinics. Introducing psychiatrist-led HCV treatment In October 2017, psychiatrist-led HCV treatment (with consultation support from the local infectious diseases clinic) was initiated at the Maria OAT clinic. Prior to this, no HCV treatment was offered on-site. Instead, OAT patients were referred to the specialized ID clinics, often resulting in missed visits, particularly among OAT patients in the high treatment needs group (personal communication Per-Erik Klasa, Maria OAT clinic). During 2017–2019, the Maria OAT clinic collaborated with the adjacent Stockholm NSP and performed liver stiffness measurements (LSM) with Fibroscan at the OAT clinic for those with viremic HCV. In parallel, an educational effort was initiated with the aim of increasing HCV knowledge among OAT staff, with a specific focus on teaching a psychiatrist at the clinic to manage and treat HCV on-site. This educational program was initiated by an ID specialist from the Karolinska University Hospital in Stockholm.
In early 2020, the Maria OAT clinic’s staff also attended the course “Hepatitis C in Primary Care and Drug and Alcohol Settings Education Program”, which specifically targeted Swedish psychiatrists and primary care physicians . The course was developed by the Australasian Society for HIV, Viral Hepatitis and Sexual Health Medicine (ASHM) in collaboration with the Kirby Institute, UNSW Sydney and the International Network on Health and Hepatitis in Substance Users (INHSU). An evaluation of the course noted that “self-efficacy related to HCV management and treatment improved immediately following the delivery of this HCV educational program” . The overall HCV education concept at the Maria OAT clinic was to teach psychiatrists to be independent in HCV investigation, treatment and follow-up (Fig. ). Briefly, during phase one the primary focus was to introduce treatment on-site and educate OAT staff on HCV. During phase two, hands-on clinical guidance was provided to nurses and psychiatrists by an ID specialist twice a week. At this stage, the ‘remote consultation form’ for HCV treatment initiation was introduced for future consultations with an ID physician experienced in HCV treatment . The form was initially developed by the Gastroenterological Society of Australia – Australian Liver Association and then adapted to current Swedish HCV treatment guidelines. The form contained information on patient data (name and date of birth), HCV history, prior HCV treatments, intercurrent medical conditions, current medication (checked for DAA interactions through the University of Liverpool’s ‘HEP Drug Interactions’ resource ), laboratory results, liver fibrosis assessment (LSM with Fibroscan and/or APRI score) and a suggested choice of DAA treatment based on HCV genotype. Finally, in phase three, all investigations and treatments were performed by the OAT psychiatrists. At this point, there was no ID specialist on-site at the OAT clinic, but one could be contacted on demand or, in a more planned manner, through a biweekly telemedicine HCV conference. The ‘remote consultation form’ was used as the basis for the HCV treatment consultations. HCV treatment Participants were included for HCV treatment between 27th October 2017 and 17th June 2022. During the first year, HCV treatment was initiated by the ID specialist, or by the psychiatrist with support from the ID specialist on-site. From 2019, all treatments were initiated by the psychiatrist, with remote ID specialist consultations when needed. All participants were assessed regarding the level of fibrosis (F0–F4), most often with Fibroscan. Severe fibrosis could also be excluded using the algorithm of an APRI score < 1 in combination with age < 35 years and a duration of IDU < 15 years . Participants with cirrhosis (F4), i.e. an LSM > 12.5 kPa , underwent ultrasound prior to treatment initiation to rule out hepatocellular carcinoma and were planned for repeated ultrasound follow-ups post SVR, in accordance with Swedish HCV guidelines. Swedish guidelines allow for treatment of acute HCV as well as unlimited retreatments. The guidelines still require HCV genotype testing before treatment start. In this study, all HCV testing was performed through venipuncture, and there was no access to capillary point-of-care tests such as dried blood spot or on-site HCV RNA testing. However, to facilitate HCV testing in this model of care, the Maria OAT clinic implemented HCV testing on-site to reduce the risk of missed diagnosis and LTFU. All laboratories in Stockholm that conduct HCV testing offer HCV RNA reflex testing for HCV-antibody positive samples to determine current HCV status. All HCV-treated participants were HCV RNA tested at EOT and at SVR, and were then followed with repeated HCV RNA tests until the last negative HCV RNA test or a subsequent reinfection, up to December 31st, 2022.
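To illustrate the fibrosis-assessment rule mentioned above, the sketch below computes the APRI score and applies the exclusion criteria; it is a simplified illustration rather than the clinic's actual workflow, and the AST upper limit of normal (40 U/L) and the example values are assumptions for demonstration only.

```python
# Minimal sketch (illustration only): APRI calculation and the rule used to exclude
# severe fibrosis (APRI < 1 together with age < 35 years and < 15 years of IDU).
# The AST upper limit of normal and the example inputs are assumed values.
def apri(ast_u_per_l: float, platelets_10e9_per_l: float, ast_uln: float = 40.0) -> float:
    """Aspartate aminotransferase-to-platelet ratio index."""
    return (ast_u_per_l / ast_uln) * 100 / platelets_10e9_per_l

def severe_fibrosis_excluded(ast: float, platelets: float, age: int, idu_years: float) -> bool:
    """Apply the combined rule described in the Methods."""
    return apri(ast, platelets) < 1 and age < 35 and idu_years < 15

# Example: AST 55 U/L, platelets 220 x 10^9/L, age 29 years, 8 years of injecting drug use
print(round(apri(55, 220), 2))                    # ~0.62
print(severe_fibrosis_excluded(55, 220, 29, 8))   # True
```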
InfCare hepatitis All HCV treatment data were registered in the national quality register InfCare Hepatitis . The registry contains demographic data (age and gender) and HCV treatment data (fibrosis evaluation, blood tests including HCV serology/virology, HCV genotypes and prescribed DAA treatment). All HCV RNA follow-up tests post SVR were registered in InfCare Hepatitis. Statistics Demographic data are presented as proportions, means or medians with ranges. All participants who achieved SVR were followed with repeated HCV tests post treatment to identify possible reinfection. The actuarial method was used to define the time to reinfection as the midpoint between the last negative HCV RNA test and the following positive HCV RNA test. Reinfection rates were defined as the number of reinfections (n = x) per 100 person-years (x/100 PY), with 95% confidence intervals (CIs). Data were analysed using JMP®, Version 15, SAS Institute Inc., Cary, NC.
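As a worked illustration of the rate definition above, the following sketch computes a reinfection rate per 100 person-years with an exact Poisson confidence interval; it is not the JMP analysis used in the study, and the CI method shown is one common choice that may differ slightly from the one applied by the software.

```python
# Minimal sketch (not the study's JMP analysis): incidence rate per 100 person-years
# with an exact Poisson 95% CI, using the event counts reported in the Results.
from scipy.stats import chi2

def rate_per_100_py(events: int, person_years: float, alpha: float = 0.05):
    """Reinfection rate per 100 person-years with an exact Poisson confidence interval."""
    rate = events / person_years * 100
    lower = (chi2.ppf(alpha / 2, 2 * events) / 2 / person_years * 100) if events > 0 else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2 / person_years * 100
    return rate, lower, upper

# Overall: 11 reinfections over 151 person-years of follow-up
print([round(x, 1) for x in rate_per_100_py(11, 151)])  # ~[7.3, 3.6, 13.0]
# High treatment needs group: 10 reinfections over 80 person-years
print([round(x, 1) for x in rate_per_100_py(10, 80)])   # ~[12.5, 6.0, 23.0]
```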
In Sweden, OAT is managed only by specialist in psychiatry +/- specialists in addiction medicine (which is a sub-specialty of psychiatry). The Maria OAT clinic is located in central Stockholm and provides OAT (methadone, buprenorphine and buprenorphine/naloxone) for approximately 500 patients. Of these, 36% dose daily at the clinic, 36% have take-home doses from the clinic and 28% have fully prescribed OAT through the pharmacy, corresponding to the differentiated treatment groups defined below. The majority (82%) have a history of injecting drug use (IDU), and the primary injected opioid in Stockholm is heroin. Previous studies have noted a 69% viremic HCV prevalence among OAT patients in Sweden and that HCV has been poorly diagnosed and followed-up at OAT clinics . An unpublished report from the Maria OAT clinic noted that among all OAT patients in 2018 ( n = 418), almost half (46%) were not tested for HCV or had an unknown HCV status in the digital medical chart. Of those tested ( n = 225), 64% had a chronic infection, 25% had a cleared infection (spontaneously or treatment induced) and 11% had never been exposed to HCV (personal communication Tobias Nordin, Maria OAT clinic).
The Maria OAT clinic offers a specialized OAT treatment approach, categorizing patients into three groups based on their treatment needs, as outlined below: High treatment needs (HTN)– patients with clinical complexity and concomitant high-risk drug use, including IDU. The treatment focus has a pronounced harm-reduction approach with OAT medicine dispensed with daily monitoring. Moderate treatment needs (MTN)– patients with clinical complexity to some degree but with treatment needs adequately addressed and less frequent drug-related high-risk presentation. Treatment is focused on rehabilitation with regular monitoring but with increased treatment autonomy. Low treatment needs (LTN)– patients with minimal high-risk or problematic substance use and with stable medical, psychiatric and psychosocial conditions. Patients receive OAT medication through prescription and manage the OAT treatment on their own without supervised intake. As part of the HCV model-of-care introduced in this study, all OAT patients at the Maria OAT clinic could access HCV testing and treatment on-site. However, most patients in the low treatment needs group and many patients in the moderate treatment needs group would also access HCV treatment through standard-of-care referrals to regional ID clinics.
In October 2017, psychiatrist-led HCV treatment (with consultation support from the local infectious diseases clinic) was initiated at the Maria OAT clinic. Prior to this, there were no HCV treatments offered on-site. Instead, OAT patients were referred to the specialized ID clinics, often resulting in a missed visit, particularly among OAT patients in the high treatment needs group (personal communication Per-Erik Klasa, Maria OAT clinic). During 2017–2019, the Maria OAT clinic collaborated with the adjacent Stockholm NSP and performed liver stiffness measurements (LSM) with Fibroscan at the OAT clinic for those with viremic HCV. On a parallel level, an educational effort was initiated with the aim of increasing HCV knowledge among OAT staff, with a specific focus on teaching a psychiatrist at the clinic to manage and treat HCV on-site. This educational program was initiated by an ID specialist from the Karolinska University Hospital in Stockholm. In early 2020 the Maria OAT clinic’s staff also attended the course “Hepatitis C in Primary Care and Drug and Alcohol Settings Education Program” that specifically targeted Swedish psychiatrists and primary care physicians . The course was developed by the Australasian Society for HIV, Viral Hepatitis and Sexual Health Medicine (ASHM) in collaboration with the Kirby Institute, UNSW Sydney and the International Network on Health and Hepatitis in Substance Users (INHSU). An evaluation of the course noted that “self-efficacy related to HCV management and treatment improved immediately following the delivery of this HCV educational program” . The overall HCV education concept at the Maria OAT clinic was to teach psychiatrists to be independent in HCV investigation, treatment and follow-up (Fig. ). Briefly, during phase one the primary focus was to introduce treatment on-site and educate OAT staff on HCV. During phase two, hands-on clinical guidance to nurses and psychiatrists was provided by an ID specialist, twice a week. At this stage, the ‘remote consultation form’ for HCV treatment initiation was introduced for future consultations with an ID physician experienced in HCV treatment . The form was initially developed by the Gastroenterological Society of Australia– Australian Liver Association and then adopted to current Swedish HCV treatment guidelines. The form contained information on patient data (name and date of birth), HCV history, prior HCV treatments, intercurrent medical conditions, current medication (checked for DAA interactions through University of Liverpool’s ‘HEP Drug Interactions’ ), laboratory results, liver fibrosis assessment (LSM with Fibroscan and/or APRI score) and a suggested choice of DAA treatment based on HCV genotype. Finally, in phase three, all investigations and treatments were performed by the OAT psychiatrists. At this point, there was no ID specialist on-site at the OAT clinic but a possibility to contact one on demand or more planned through a biweekly telemedicine HCV conference. The ‘remote consultation form’ was used as the basis for the HCV treatment consultations.
Participants were included for HCV treatment between 27th October 2017 and 17th June 2022. During the first year, HCV treatment was initiated by the ID specialist, or by the psychiatrist with support from the ID specialist on-site. From 2019, all treatments were initiated by the psychiatrist with remote ID specialist consultations when needed. All participants were assessed regarding their level of fibrosis (F0-F4), most often with Fibroscan. Severe fibrosis could also be excluded using the algorithm of an APRI score < 1 in combination with age < 35 years and duration of IDU < 15 years . Participants with cirrhosis (F4), i.e. an LSM > 12.5 kPa , underwent ultrasound prior to treatment initiation to rule out hepatocellular carcinoma and were planned for repeated ultrasound follow-ups post SVR, in accordance with Swedish HCV guidelines. Swedish guidelines allow for treatment of acute HCV as well as unlimited retreatments. The guidelines still require HCV genotype testing before treatment start. In this study, all HCV testing was performed through venipuncture and there was no access to capillary point-of-care tests such as dried blood spot or on-site HCV RNA testing. However, to facilitate HCV testing in this model of care, the Maria OAT clinic implemented HCV testing on-site to reduce the risk of missed diagnosis and LTFU. All laboratories in Stockholm that conduct HCV testing offer HCV RNA reflex testing for HCV-antibody positive samples to determine current HCV status. All HCV-treated participants were HCV RNA tested at EOT, at SVR and then followed with repeated HCV RNA tests until the last negative HCV RNA test or a subsequent reinfection, up to December 31st, 2022.
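As an illustration of the fibrosis triage described above, the APRI-based exclusion rule can be sketched in R as below. This is a minimal sketch and not the clinic's actual workflow; the AST upper limit of normal (here 45 IU/L) and the example patient values are assumptions and should be replaced by local laboratory references.

```r
# Minimal sketch of the rule used to exclude severe fibrosis without Fibroscan:
# APRI < 1 AND age < 35 years AND duration of IDU < 15 years.
# The AST upper limit of normal (ULN) below is an assumption; use the local lab value.

apri_score <- function(ast_iu_l, platelets_10e9_l, ast_uln = 45) {
  # APRI = (AST / AST ULN) / platelet count (10^9/L) * 100
  (ast_iu_l / ast_uln) / platelets_10e9_l * 100
}

severe_fibrosis_excluded <- function(apri, age_years, idu_years) {
  apri < 1 & age_years < 35 & idu_years < 15
}

# Hypothetical example patient
apri <- apri_score(ast_iu_l = 52, platelets_10e9_l = 210)
apri
severe_fibrosis_excluded(apri, age_years = 29, idu_years = 8)
```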
All HCV treatment data were registered in the national quality register InfCare Hepatitis . The registry contains demographic data (age and gender) and HCV treatment data (fibrosis evaluation, blood tests including HCV serology/virology, HCV genotypes and prescribed DAA treatment). All HCV RNA follow-up tests post SVR were registered in InfCare Hepatitis.
Demographic data are presented as proportions, means or medians with ranges. All participants were followed with repeated HCV tests post treatment with SVR to identify possible reinfection. The actuarial method was used to define the time to reinfection as the midpoint between the last HCV RNA negative test and the following positive HCV RNA test. Reinfection rates were defined as the number of reinfections (n = x) per 100 person-years (x/100 PY), with 95% confidence intervals (CIs). Data were analysed using JMP ® , Version 15, SAS Institute Inc., Cary, NC.
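To make the rate calculation concrete, the sketch below shows one way to compute person-years at risk with the actuarial midpoint rule and a reinfection rate per 100 person-years. The dates are hypothetical; the original analysis was performed in JMP and its confidence-interval method is not stated, so the exact Poisson interval from poisson.test() is used here only as one reasonable choice.

```r
# Sketch of the reinfection-rate calculation: time at risk runs from SVR to the
# last negative HCV RNA test, or to the actuarial midpoint between the last
# negative and the first positive test for those reinfected.

df <- data.frame(
  svr_date       = as.Date(c("2019-03-01", "2019-06-15", "2020-01-10")),
  last_neg_date  = as.Date(c("2021-03-01", "2020-06-15", "2022-01-10")),
  first_pos_date = as.Date(c(NA, "2020-12-15", NA))   # NA = no reinfection observed
)

days_at_risk <- ifelse(
  is.na(df$first_pos_date),
  as.numeric(df$last_neg_date - df$svr_date),
  as.numeric(df$last_neg_date - df$svr_date) +
    as.numeric(df$first_pos_date - df$last_neg_date) / 2   # actuarial midpoint
)
person_years   <- days_at_risk / 365.25
total_py       <- sum(person_years)
n_reinfections <- sum(!is.na(df$first_pos_date))

rate_per_100py <- n_reinfections / total_py * 100

# Exact Poisson 95% CI for the rate (may differ from the method used in JMP)
ci <- poisson.test(n_reinfections, T = total_py)$conf.int * 100
c(rate = rate_per_100py, lower = ci[1], upper = ci[2])
```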
By June 2022, 133 participants had initiated HCV treatment on-site at the Maria OAT clinic. Six participants were retreated, giving a total of 139 treatment initiations. Thirty-five participants (25%) initiated HCV treatment through the ID specialist together with the psychiatrist in phase two during the first year, while 104 participants (75%) were managed by the psychiatrist alone in phase three (Fig. ). HCV treatment was mostly provided through directly observed treatment (DOT) or weekly administrations, with a minority initiating HCV treatment while in prison ( n = 1) or in treatment homes ( n = 13). The demographics of treated participants and treatment groups are depicted in Table . Most treatments were started in the high treatment needs group (57.6%) and in the moderate treatment needs group (34.5%). Overall, 72% were men and mean age was 44.7 years (range 22–65). The distribution of genotypes (GT) was 49%, 41% and 10% for GT 3, GT 1 and GT 2, respectively. The majority had absent or mild fibrosis (63%), while 5% had LSM indicating liver cirrhosis. The treatment strategy followed Swedish HCV treatment guidelines, and the two most commonly used DAA treatment strategies were eight weeks of sofosbuvir/ledipasvir for GT 1 (25.2%) and 12 weeks of sofosbuvir/ledipasvir for GT 2 and 3 (44.6%). All treated patients, 139/139 (100%), were HCV RNA negative at EOT and 123/139 (88%) reached SVR, with 8 viral recurrences, 4 LTFU and 4 deaths between EOT and SVR (Fig. ). All viral recurrences between EOT and SVR were found in the high treatment needs group, while LTFU and deaths were found in both the high treatment needs group (LTFU n = 2, deaths n = 3) and the moderate treatment needs group (LTFU n = 2, deaths n = 1). There were no demographic differences between the three OAT treatment groups regarding age and gender. A total of 11 reinfections were noted post SVR (Fig. ), over 151 person-years of follow-up, giving a reinfection rate of 7.3/100 PY (95% CI 4.1, 12.9). Most reinfections post SVR, 91% (10/11), occurred in the high treatment needs group, with a reinfection rate of 12.5/100 PY (95% CI 7.0-22.3) over 80 person-years of follow-up. One reinfection noted in the moderate treatment needs group represented a reinfection rate of 1.5/100 PY (95% CI 0.2–10.9) over 64 person-years of follow-up. There were no reinfections in the low treatment needs group (6 person-years of follow-up). Over the study period, the numbers of HCV treatment initiations were: 2017, from October ( n = 5); 2018 ( n = 35); 2019 ( n = 38); 2020 ( n = 22); 2021 ( n = 27); and in 2022, until September ( n = 12).
In this study, we successfully introduced psychiatrist-led HCV treatment at an OAT clinic in Stockholm. Overall, we noted favourable HCV treatment results and levels of reinfection consistent with the literature. In a review from 2018, Hajarizadeh et al. concluded that treatment completion was 97.4% and 96.9% and SVR was 90.7% and 87.4% among those with OAT and those with recent IDU, respectively . The review also concluded that people with recent IDU had a generally lower level of SVR than those with no history of IDU and that LTFU at SVR was the main contributor rather than virological failure. These data correspond well to ours, where treatment completion was 100% and SVR was 88%. However, the decreased level of SVR in our data reflects both LTFU and viral recurrence between EOT and SVR. A meta-analysis from 2020 investigating HCV reinfection after successful antiviral treatment among PWID noted a reinfection rate of 6.2/100 PY among people with recent IDU and 3.8/100 PY among those receiving OAT . In further stratified analyses among people with OAT, reinfection rates were 1.4/100 PY and 5.9/100 PY among those with no recent drug use and those with recent drug use, respectively. In adjusted rate ratio analyses, those with recent drug use were at a 3.5-fold higher risk of reinfection than those with no recent drug use . The overall reinfection rate at the Maria OAT clinic was 7.3/100 PY, but was higher in the high treatment needs group (12.5/100 PY), indicating a higher level of high-risk drug use. Although the pooled reinfection rates in the meta-analysis were lower than those in our study, the meta-analysis included studies with reinfection rates between 16.7 and 23.8/100 PY among DAA-treated PWID and persons with OAT . Another international multicenter study prospectively followed HCV-treated OAT participants for reinfection up to 3 years after successful DAA treatment. The overall reinfection rate was 1.7/100 PY, and 1.9/100 PY among those with recent drug use. However, only 20% reported IDU during follow-up and the authors concluded that reinfection rates could be underestimated, as the study participants may have represented a group with higher stability and lower risk for reinfection. Additionally, the high level of LTFU may have represented participants with lower stability, underestimating reinfection rates . In a study from Canada ( n = 482) with 46% receiving OAT, 91% of reinfections occurred among people with known recent IDU at the time of treatment start, which corresponds well to our data. The overall reinfection rate was 3.6/100 PY and, among those with recent IDU, the reinfection rate was 6.6/100 PY. In a study from the USA ( n = 141), overall low levels of reinfection among people receiving OAT were noted (1.1/100 PY), but rates were higher among those with recent IDU (7.7/100 PY) . A study investigating HCV treatment at the Stockholm NSP noted that PWID with OAT and concurrent IDU attending the NSP received HCV treatment to a higher extent in settings other than the NSP, mainly in OAT clinics and ID clinics. However, the reinfection rates did not differ between those treated at the NSP (8.4/100 PY, 95% CI 5.3, 13.2) and those treated at other clinics (9.9/100 PY, 95% CI 6.9, 14.3) . Altogether, as noted in a modelling study, persistently high HCV treatment rates in PWID will initially result in an increased number of reinfections, which will be curbed over time .
Thus, reinfections will occur in high-risk populations, but retreatment needs to be recognized as a vital part of an effective HCV elimination strategy . Given that Sweden generally has low OAT and NSP coverage, increased access to such harm reduction interventions needs to be a future priority, as pinpointed in the recent report by the Swedish Government’s Drug Commission of Inquiry . While updated figures on overall HCV prevalence among OAT participants in Stockholm or nationally are currently unavailable, another recent study confirms a significant increase in treatment uptake among PWID in Stockholm, both within and outside of OAT programs. This has led to a significant reduction in HCV prevalence, from 60% in 2017 to 30% in 2021 . The HCV care cascade represents a series of sequential steps: diagnosis, linkage to care, treatment, reaching SVR and follow-up. However, OAT patients and PWID often face challenges at each step, leading to poor treatment uptake and completion rates. Bringing HCV treatment to the individuals’ geographical locations, including OAT clinics, can significantly improve access and engagement in care, which is also highlighted in the Swedish HCV elimination strategy . Task shifting involves delegating specific responsibilities and tasks to healthcare providers with appropriate training and expertise beyond their traditional roles . Psychiatrists and addiction specialists already play a crucial role in the care of OAT patients and PWID. Expanding the scope of practice for psychiatrists and addiction specialists to include HCV treatment can have several advantages for OAT patients and PWID. This approach could reduce the burden of referrals to specialized clinics, which can be challenging for OAT patients and PWID due to logistical barriers, resulting in missed visits . Bringing HCV treatment to OAT clinics also facilitates a more patient-centered and supportive environment and can reduce the stigma associated with seeking separate specialized care, increasing the likelihood of individuals entering and remaining in HCV care . OAT clinics have often established trusted patient relationships and an understanding of possible specific needs. Thus, incorporating HCV treatment into these practices can provide comprehensive, integrated care that addresses mental health, substance use disorders and HCV infection. Expanding the pool of healthcare providers who can prescribe and monitor HCV treatment will result in increased treatment capacity, reduced waiting times, and enhanced accessibility, particularly in areas with limited access to specialized HCV care. Task-shifting to psychiatrists and addiction specialists will also reduce the workload on ID specialists, hepatologists and gastroenterologists, allowing a focus on more complex cases and specialized care (e.g., patients with cirrhosis and coinfections), thereby improving the overall efficiency of the healthcare system. However, to implement and maintain effective task-shifting strategies, continuous support and education from HCV specialists are essential and could be advantageously provided through telemedicine . As previously noted, a large proportion of OAT patients are not adequately tested for HCV at OAT clinics. To enhance the HCV treatment cascade and continuum of care, targeted HCV diagnosis efforts, including structured HCV testing and follow-up, are needed.
The introduction of psychiatrist-led HCV treatment at the Maria OAT clinic has resulted in a more structured approach to HCV diagnostics, ensuring that all OAT participants are HCV tested upon inclusion and at least annually for those with ongoing risk behaviours, while also providing easy access to HCV treatment. This successful model of care, with psychiatrist-led HCV treatment on-site at OAT clinics, has since 2021 been further implemented at four other OAT clinics in Stockholm. However, streamlining HCV care by utilizing point-of-care testing, introducing patient education and peer support, and simplifying treatment regimens (e.g. introducing pangenotypic treatment) would further enhance treatment accessibility and completion rates among OAT patients and PWID [ , , ]. During the COVID-19 pandemic, the number of HCV treatments decreased worldwide and in Sweden . Between 2019 and 2020, HCV treatment initiations at ID clinics decreased by over 50% and remained at that level during 2021–2022 (personal communication with the registry holder for InfCare Hepatitis). Experiences varied by region, ranging from disrupted ID outreach activities at OAT clinics to full or temporary pauses in treatment starts at ID clinics that instead needed to focus on COVID-19 . HCV treatments were also discontinued or decreased at many NSPs, as these programs are mainly managed by ID clinics in Sweden. At the Maria OAT clinic, there was no discontinuation of HCV treatment on-site since psychiatrists themselves treated HCV. However, a 42% decrease in HCV starts in 2020 compared to 2019 was noted, although treatment numbers increased again in 2021. HCV treatment education for psychiatrists, addiction specialists and staff at OAT clinics thus makes HCV treatment and care more sustainable for OAT patients than, for example, relying on referrals, intermittently funded projects, or ID specialists alone treating HCV, as was specifically noted during the COVID-19 pandemic. Some limitations of our study need to be addressed. The outcomes were based on real-world, convenience sampling data from HCV-treated participants on-site at the Maria OAT clinic and may thus not be generalizable to other treatment settings. Additionally, our study design lacked a control group, which means that our model of care cannot be fully evaluated, as the HCV-treated participants could eventually have received HCV treatment elsewhere. Most HCV-treated participants were found in the high treatment needs group, and OAT patients in the moderate and low treatment needs groups might have received HCV treatment elsewhere, which was not investigated in this study. As a result, possible unknown treatments and reinfections in those groups could affect overall SVR and reinfection rates in the whole Maria OAT cohort, most likely resulting in overall higher SVR rates and lower reinfection rates, as lower levels of high-risk drug use are expected in the moderate and low treatment needs groups. On the other hand, a strength of our study was that HCV treatments in the high treatment needs group were all performed on-site at the OAT clinic and not elsewhere, indicating valid treatment outcomes in this group and a successful model of care. Still, some OAT participants may have missed HCV investigation and treatment due to lack of engagement and other reasons not investigated in this study.
A further limitation is that we lack data on individual risk behaviour and IDU post SVR and instead rely on a general perception of risk behaviour in the differentiated OAT treatment groups. However, as expected, most reinfections were noted in the high treatment needs group, where continuous IDU was highly prevalent. Lastly, we performed no phylogenetic testing or genotype testing on those with viral recurrence between EOT and SVR. Thus, we cannot fully differentiate virological treatment failure from reinfection. However, all participants with viral recurrence were HCV RNA negative at EOT, which indicates a high likelihood of cure rather than virological failure. Classifying such recurrences as treatment failures may therefore have led to an underestimation of reinfections in this study.
Introducing psychiatrist-led HCV treatment at an OAT clinic was effective, with good treatment results and levels of reinfection consistent with the literature. Enhancing the HCV care cascade and continuum of care for OAT patients offers significant benefits and is essential for local and global HCV elimination. Task shifting involving psychiatrists and addiction specialists in the treatment of HCV not only ensures integrated care but also optimizes healthcare resources. These approaches, coupled with comprehensive harm reduction strategies, can effectively address the challenges associated with HCV among OAT patients and PWID, leading to improved treatment uptake, completion rates and, ultimately, better health outcomes.
|
Decoding chromosomal instability insights in CRC by integrating omics and patient-derived organoids | 91a0ba08-e3b4-41c9-aaff-560920693151 | 11869439 | Biochemistry[mh] | Chromosomal instability (CIN), defined by the ongoing rate of chromosome missegregation, is a recognized hallmark of cancer, conferring the necessary phenotypic plasticity for cells to survive in stressful conditions as well as increasing heterogeneity, promoting tumour evolution . 70% of colorectal cancers (CRCs) display CIN , which is associated with poor prognosis . However, strategies specifically targeting this tumour type are lacking, and patients are neither treated nor stratified based on this feature. CIN can be triggered by mutations or treatments that impair the cellular processes involved in accurate chromosome segregation; telomere alterations and DNA-repair damage can also contribute to inducing CIN. In addition, chromosomal breaks and rearrangements induced by CIN and chromothripsis can further alter genome integrity and increase CIN . These complex alterations constitute a significant burden for cancer cells, as normal cells fail to survive even small alterations . Somewhat paradoxically, accumulating additional alterations can promote tumourigenesis, possibly due to emerging mechanisms that help the cell to overcome fitness stress , while an excess of CIN can in turn increase cell stress beyond a point of no return, thereby inducing cell death . Therefore, knowledge of the mechanisms that help cells cope with CIN stress could lead to the development of innovative treatment strategies, and omics technologies, able to explore the impact of chromosomal abnormalities on a large scale, can certainly play a key role in this endeavour. The dynamic nature of CIN requires the use of proper functional models. Indeed, although 2D cancer cell lines have been extensively used to study the dynamics of CIN in cancer, they tend to accumulate genome abnormalities per se and do not represent intratumoural heterogeneity. Tumour patient-derived organoids (PDOs) currently provide the most faithful depiction of human cancer and could represent the tool to fill the CIN knowledge gap. A few studies have employed organoids as CIN models. Indeed, CRC PDOs were shown to widely display CIN, and although mitotic errors led to cell death, some PDOs were largely insensitive to them , while radioresistant rectal organoids displayed less CIN than sensitive ones . Ovarian cancer PDOs were shown to be good models of CIN in terms of genomics and transcriptomics , while oesophageal cancer PDOs were employed to investigate the causes of CIN . Investigating the poorly defined molecular mechanisms that underlie CIN in PDOs is crucial to increase our knowledge of this complex phenomenon, as until recently all we knew came from 2D cell lines or tissues. However, more data are needed to confirm that the CIN profile is truly recapitulated by organoids at both the genotype and phenotype level, and none of the studies conducted so far with PDOs has focused on the processes that help cells endure CIN. We previously demonstrated that PDOs from patients with advanced CRC faithfully recapitulated the genome and transcriptome of tissues, and through an integrated proteotranscriptomic approach we could identify promising biomarkers of response/resistance to both standard and non-standard drugs . Here, we used the weighted Genome Instability Index (wGII) to classify our PDO models according to CIN .
We demonstrated that they reproduce the CIN phenotype of tissues in terms of genome, transcriptome and proteome. Proteotranscriptomics uncovered a significant relationship between metabolic rewiring and epithelial-mesenchymal transition (EMT) in CIN + CRC PDOs. Moreover, a proteome-wGII correlation reinforced these processes, pointing more specifically to enhanced mitochondrial metabolism, with a significant increase in activities related to acyl-CoA species, and to the activation of YAP signalling in CIN. Using omics and functional genomic databases, we prioritised a subset of these proteins and molecular processes putatively relevant in CIN. Taken together, our results add to our knowledge of the molecular mechanisms that could be operating in high-CIN CRC PDOs, helping them withstand the stressful conditions imposed by this phenomenon and which constitute putative therapeutic targets.
Detailed methods are described in Supplementary Methods. All reagents and tools are listed in Table S1.
Ethics
The study was approved by the Hospital Clínico Universitario de Valencia Ethics Committee (2018/063, 2021/083) in compliance with the Declaration of Helsinki. All patients provided written informed consent.
Tissue processing and organoid culture
Fresh tissues were processed as previously published . Supplementary Table S2 shows the main features of, and the analyses performed on, each of the organoids employed in this paper.
Copy number and CIN status determination
Cytoscan HD was performed on PDOs and tissues according to the manufacturer's protocol. Data were analysed with ChAS and IGV (v. 3.0). The weighted Genome Instability Index (wGII) was calculated as published elsewhere . Genomic coordinates of gained/lost regions retrieved from ChAS were used to estimate the total copy number alteration (CNA) burden in base pairs (bp), normalized to chromosome length.
RNA-Sequencing (RNA-seq) and proteomics analysis by LC–MS/MS-SWATH
RNA-seq and quantitative proteomics were performed as previously published . See Supplementary Methods for details. Differential gene expression analysis was conducted with the DESeq2 v1.34 package in RStudio. GSEA was used for hallmark analysis. Unsupervised hierarchical cluster analysis based on the “EMT” gene set from the Molecular Signatures Database (MSigDB, GSEA) was done using a Euclidean distance measure and Ward linkage. Motif analysis was conducted by running ISMARA software ( https://ismara.unibas.ch/mara/ ) on FASTQ files; motif z-values lower than 1.5 were filtered out. Differential expression analysis based on SWATH-normalized protein areas was performed. Functional analysis was conducted using the STRING database and the Cytoscape StringApp. The correlation between CIN and protein abundance was analysed by calculating the Pearson coefficient. To identify mitochondrial proteins, we used information from UniProt (subcellular location and GO cellular component); proteins containing the term ‘Mitochondria’ in either of these two entries were considered mitochondrial proteins, although they may also have other locations. Proteins differential according to RNA-seq or proteomic (differential and Pearson correlation analysis) data were considered.
Immunofluorescence staining of PDO paraffin sections
PDO domes were collected in 4% neutral buffered formalin and paraffin embedded as previously described . Sections of 4 µm were cut and dewaxed, and sodium citrate antigen retrieval was performed (Target Retrieval Solution, Citrate pH 9, S236784-2) followed by blocking (Dako-REAL™, Dako, cat. No. S2023). PDO slides were incubated with the following primary antibodies in EnVision FLEX antibody diluent (Dako, cat. No. K8006): ActYAP1 (abcam; ab205270; 1:200), IPO7 (Santa Cruz; sc-365231; 1:50), acetylated-lysine (Cell signalling; #9441; 1:200). After washing three times with PBS, samples were incubated with the corresponding Alexa 488- and Alexa 647-conjugated secondary antibodies (Invitrogen; A11001, A31571; 1:500) and mounted in ProLong Gold Antifade Mountant with DNA Stain DAPI (Invitrogen; P36941). Samples were imaged on a Leica TCS-SP8 confocal microscope. Representative images were acquired and are shown as Z‐projections, single slices or XZ cross sections. Image analysis was performed with CellProfiler and ImageJ software.
Statistical analyses
The significance threshold of all statistical analyses was set at a p-value below 0.05. The wGII correlation between PDOs and matched fresh tissue was assessed with linear regression analysis. The PDOs-TCGA transcriptomic comparison was performed with the Chi-square test. The correlation between CIN and protein abundance was analysed by calculating the Pearson coefficient, using the Pearson function of a Microsoft Excel spreadsheet with the median-normalized and log2-transformed data for each protein and the wGII value for each sample as function inputs. Venn diagrams were used to visualize differences between categorical groups. Statistical analyses related to publicly available datasets are described in the corresponding section.
Publicly available datasets analysis
For cell line dependencies, CIN + and CIN- cell lines were analysed for the combined CRISPR/Cas9 DepMap Public 23Q4 + Score and Chronos datasets and the RNA-interference combined Achilles + DRIVE + Marcotte, DEMETER2 dataset from the DepMap portal ( https://depmap.org/portal/ ), selecting genes from the Pearson signature proteins. The dependence score for each gene in CIN + vs CIN- was compared via multiple t-tests. Data were represented with volcano plots indicating the dependence score effect size. For the drug analysis, the AUCs for each compound of the CTD^2 dataset from the DepMap portal were compared for CIN + vs CIN- via multiple t-tests. Proteins from the Pearson signature were searched in the proteomic data from Zhang et al. for CIN vs MSI annotated samples. Z-scores were calculated for each protein, and a clustering heatmap was built with the pheatmap package. Fisher's exact test was used to evaluate the statistical significance of tissue categorization. Kaplan–Meier plotter software ( https://kmplot.com/analysis/ ) was used to evaluate the prognostic value of the prioritized targets. The TNMplot web tool ( https://tnmplot.com/analysis/ ) was used to compare gene expression in CRC tumour versus normal tissue, using RNAseq data and normal tissues near the tumour area.
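To illustrate the wGII calculation described above under "Copy number and CIN status determination", a minimal R sketch is given below. It assumes a segment table (chromosome, start, end, gain/loss state) such as one exported from ChAS; the segment coordinates and chromosome lengths shown are illustrative only, overlapping segments are not merged, and the published wGII definition should be consulted for the exact implementation.

```r
# Sketch of the wGII: for each chromosome, the fraction of its length covered
# by gained or lost segments; wGII is the mean of these per-chromosome
# fractions, so every chromosome contributes equally regardless of size.

chrom_len <- c(chr1 = 248956422, chr2 = 242193529, chr3 = 198295559)  # subset, illustrative

segments <- data.frame(
  chrom = c("chr1", "chr1", "chr2"),
  start = c(1e6, 50e6, 10e6),
  end   = c(20e6, 120e6, 15e6),
  state = c("gain", "loss", "gain")
)

wgii <- function(segments, chrom_len) {
  altered_bp <- tapply(segments$end - segments$start, segments$chrom, sum)
  frac <- rep(0, length(chrom_len))
  names(frac) <- names(chrom_len)
  frac[names(altered_bp)] <- altered_bp / chrom_len[names(altered_bp)]
  mean(frac)   # average of per-chromosome altered fractions
}

w <- wgii(segments, chrom_len)
w
w > 0.2   # wGII above 0.2 taken as chromosomally unstable (CIN+)
```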
PDOs reproduce CIN profile of tissues at genome and transcriptome level
CytoscanHD was conducted on PDOs and matched fresh tissues to detect CNA (Additional file S1). We included 9 patients/12 PDOs for which we have copy number data, a sample size similar to previous studies . As we previously demonstrated with fewer models , they share the same copy number profile (Fig. S1), and in most cases organoids were enriched in gains/losses compared to the tissues. The wGII was employed as a surrogate of CIN, as a wGII > 0·2 indicates the presence of CIN . Across our cohort of twelve matched organoids and tissues (Table S2), 70% of samples displayed CIN (Fig. a-b), consistent with the literature, and PDO CIN was positively correlated with tissue CIN (linear regression, r = 0·867, p-value = 0·001) (Fig. c). Nevertheless, two cases showed substantial discordance. For patient 43, the wGII was much higher in the organoid than in the tissue, while in patient 47 the tissue was negative but the PDO had one of the highest wGII values, compatible with an extremely low tumour cell percentage . Interestingly, CIN status can be acquired during tumour evolution, as shown by patient 24, where organoids generated from two different synchronous liver metastases were CIN-, but the tissue obtained from a progressive brain metastasis was CIN + (Fig. a), indicating the acquisition of a more aggressive phenotype. PDOs generated from different metastases of the same patient show morphological heterogeneity in culture (Fig. d). Using the comprehensive coverage of CytoscanHD, which goes down to gene level, we assessed intra-patient copy number heterogeneity from different metastases, detecting significant differences in CNA (Fig. e). Unfortunately, this analysis was precluded for mCTO38S8 and for the brain metastasis of patient 24, where only tissue comparison was possible, due to lack of growth before obtaining sufficient DNA in the first case, and to no growth at all in the second. We also detected mosaicism, confirming heterogeneity in terms of subclonal CNA (Fig. S2). Next, lost/gained genes were matched with those most commonly reported within the TCGA. The gene list was extracted from the CRC TCGA (cBioPortal), selecting genes altered in at least 1% of samples. These were compared with our PDO-tissue cohort (Fig. a). The genes that most frequently present copy number gains/losses across the TCGA were found within our PDOs and confirmed in our tissues. The genes most frequently gained/lost in our cohorts specifically are depicted in Fig. b. Among the amplified genes, AURKA was gained in 80% of PDOs and has been associated with CIN as it regulates the function of centrosomes, spindles and kinetochores for mitotic progression , as was TOP1 , which is involved in the stabilization of long chromosomes . Indeed, on histological slides from CIN and non-CIN subtypes, numerous mitotic figures were evident in both groups (between 6 and 18 per 5 high-power fields (HPF)) (Fig. S3a). However, atypical mitotic figures appeared to be more frequently observed in the CIN subtype (64% vs. 48%), supporting a phenotype indicative of mitotic spindle alterations in CIN PDOs. Among the lost genes were PCM1 , involved in centrosome integrity maintenance, TUSC3 , which can inhibit EMT, and FHIT , related with CIN. When we consider the pathogenic mutations of PDOs , mutant genes tend to have more copy number variations in CIN + organoids (Fig. S3b).
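A minimal sketch of the gene-level comparison described above is given below: per-gene gain/loss frequencies across PDOs are tabulated and intersected with a list of genes recurrently altered in the CRC TCGA cohort. All sample names, gene calls and the TCGA list here are invented for illustration; the real comparison used the ChAS exports and a cBioPortal-derived gene list.

```r
# Sketch: fraction of PDOs carrying a gain/loss for each gene, flagged if the
# gene also appears in a (hypothetical) list of recurrently altered TCGA genes.
calls <- data.frame(
  sample = c("PDO1", "PDO1", "PDO2", "PDO2", "PDO3"),
  gene   = c("AURKA", "FHIT", "AURKA", "TOP1", "AURKA"),
  state  = c("gain", "loss", "gain", "gain", "gain")
)

n_samples <- length(unique(calls$sample))
freq <- aggregate(sample ~ gene + state, data = calls,
                  FUN = function(x) length(unique(x)) / n_samples)
names(freq)[3] <- "fraction_of_PDOs"

tcga_recurrent <- c("AURKA", "TOP1", "PCM1", "TUSC3", "FHIT")  # illustrative list
freq$in_tcga_list <- freq$gene %in% tcga_recurrent
freq[order(-freq$fraction_of_PDOs), ]
```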
We previously showed that PDOs reproduced the transcriptomic profile of their original tissues , so here we asked whether PDOs would also reflect CIN in an independent tissue cohort at the transcriptomic level. We computed differential expression analysis between all CIN + and CIN- PDOs. GSEA analysis showed that CIN + PDOs present an upregulation of MYC and E2F targets, “protein secretion” and “unfolded protein response” signatures, as well as a downregulation of those related with “TNFalpha signaling”, “p53 pathway” and “apoptosis” signatures (Fig. S4). Subsequently, we found that using the 100 most differentially expressed genes (Fig. c; Additional file S2) we obtained a good classification of the COAD PanCancer cohort study in the TCGA (Fig. d; Chi-square test, p < 0·0001, Fig. S5). Many of the genes found in the top 100 were previously associated with CIN for their roles in mitotic spindle dynamics ( NSMCE2 ), the kinetochore-microtubule complex ( FDG1 ) and the regulation of ploidy ( NUAK1 ). Another interesting gene was NPHP4 , which encodes a negative Hippo pathway regulator that has been associated with LATS1-induced chromosomal instability . Moreover, we contrasted our data with previously described CIN signatures and found that the HET70 CIN signature independently classified our PDOs as CIN + or CIN- (pval < 0.001, Fisher's exact test) (Fig. S6), aligning with the findings of Sheltzer .
An integrative proteotranscriptomic approach to unravel differential processes in CIN+ and CIN- PDOs
Since PDOs were a good model of CIN + CRC at genomic and transcriptomic levels, we further investigated the molecular mechanisms underlying CIN in organoids, applying an integrated proteotranscriptomic strategy . As we could perform SWATH-MS for a subset of PDOs, we computed differential gene expression analysis for the same models employed in proteomics (Table S2). We generated a proteomic dataset, identifying 116 proteins (with ≥ two peptides), 62 up- and 54 down-regulated in CIN + PDOs (Additional file S3), while in the transcriptomic dataset 1017 genes were differentially expressed, 644 up- and 373 downregulated (Additional file S2). Among them, 15 were commonly detected in both datasets (Fig. a) with highly correlated log2 FC values, while a modest correlation was observed for proteins showing significant differences only at the RNA level (Fig. S7), highlighting the value of integrating both datasets in such studies. As many proteins (101/116) were differential only at the protein level, we performed a functional annotation study of these and found acetylation to be the process containing the highest number of proteins [66/101] (Fig. b). These data align with the key role of post-translational modifications (PTMs) in gene expression regulation, with acetylation being particularly relevant in this dataset. Our approach involved extracting common processes through an integrative network analysis using the STRING database via Cytoscape . We focused on the largest high confidence (score 0·7) functionally related network obtained (Fig. c). It contained 454 nodes, with 80 proteins, 364 RNAs and 10 differentially expressed genes detected at both the protein and RNA level. Functional enrichment confirmed that most gene ontologies (GOs) were represented by both, with four highly connected modules annotated as ‘metabolism’, ‘cytoskeleton organization and extracellular matrix’, ‘gene expression and chromatin’ and ‘signaling’ (Fig. c).
A topological clustering algorithm was applied to explore functional interactions between groups of nodes inside modules (Fig. d), with proteins included in the term ‘acetylation’ in panel b highlighted as larger blue nodes. Functional enrichment is shown in Fig. e, where the GO term and/or the first functional term with the lowest FDR has been selected for each cluster. The complete data are in Additional file S3. The ‘metabolic module’, containing seven clusters, showed a notable role for mitochondrial metabolism. Electron transport chain and mitochondrial ATP synthesis processes were enriched in cluster 3, including different subunits of complexes III and IV and the mitochondrial phosphate carrier SLC25A3, an essential component of the ATP synthasome . Several enzymes involved in the tricarboxylic acid cycle (TCA) appeared in cluster 9, such as SUCLA2 and SUCLG1, the two subunits of succinate-CoA ligase, which functions in the TCA cycle coupling the hydrolysis of succinyl-CoA to the synthesis of ATP. Strikingly, two other clusters (20 and 5) were functionally related to many processes involving acyl-CoA species. Cluster 20 was enriched in genes related to fatty acid beta-oxidation (FAO) and branched-chain amino acid (BCAA) degradation, while cluster 5 was associated with ketone body metabolism, which includes the Coenzyme A Synthase gene COASY . In the ‘cytoskeleton and extracellular matrix’ module, nine clusters emerged. Cluster 4 contained a group of differential RNAs involved in actin cytoskeleton organization, integrated due to the upregulated hub protein CDC42, a small GTPase involved in the regulation of signalling pathways that control cell cycle progression, migration and morphology . Cluster 15 was composed of a set of extracellular matrix glycoproteins including three laminins (LAMB1, LAMC1 and LAMA1) together with COL18A1, all upregulated in CIN + . Cluster 27, meanwhile, included significantly reduced cell adhesion proteins well known for their involvement in gastrointestinal cancers, such as EPCAM and CEACAM5. In the ‘gene expression and chromatin’ module, mixed clusters 2, 7 and 8 were formed by RNA-binding proteins and ribosomal proteins involved in the modulation of mRNA stability and translation of certain genes. Among these was LRPPRC, a potential oncogene in multiple tumour types, detected at both RNA and protein level, which plays a role in mitochondrial homeostasis . We also detected upregulation of two genes, IGF2BP1/2 , which encode oncofoetal IGF2 mRNA-binding proteins acting as RNA N6-methyladenosine modification readers. Recently, IGF2BP2 was shown to promote CRC progression by stabilizing oncogenic mRNAs, including YAP mRNA , a downstream nuclear effector of the Hippo signaling pathway with a role in the development and progression of multiple cancers. Finally, in the ‘signaling’ module, cluster 6 showed the integration of genes and proteins related with the regulation of FOXO transcription factor localization, with a significant reduction in YWHAZ and SFN expression. These are members of the 14-3-3 protein family, which associate with YAP, leading to its translocation to the cytoplasm and consequent inhibition . In addition, cluster 13 contained protein phosphatases involved in chromosome segregation and spindle formation, and cluster 26 showed the DNA-PK protein integrated with genes of the DNA repair machinery and related to chromosome organization.
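The unsupervised clustering on the MSigDB EMT signature applied in the next subsection (Euclidean distance, Ward linkage, as stated in the Methods) can be sketched in R as below. The expression matrix is simulated; in the study, normalized RNA-seq values for the hallmark EMT gene set were used, and "Ward linkage" is assumed here to correspond to hclust's "ward.D2" option.

```r
# Sketch of the EMT-signature clustering: Euclidean distance and Ward linkage
# on a genes x samples expression matrix (values simulated for illustration).
set.seed(1)
expr <- matrix(rnorm(50 * 8), nrow = 50,
               dimnames = list(paste0("EMT_gene", 1:50), paste0("PDO", 1:8)))
expr[, 1:4] <- expr[, 1:4] + 1.5          # make half of the samples look "EMT-high"

d  <- dist(t(expr), method = "euclidean")  # distances between samples
hc <- hclust(d, method = "ward.D2")        # Ward linkage
plot(hc, main = "PDOs clustered on EMT signature expression")
cutree(hc, k = 2)                          # e.g. EMT-high vs EMT-low groups
```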
Proteomic analysis reveals laminin enrichment in CIN+ PDOs, consistent with an EMT transcriptomic profile
Among the clusters of the integrated analysis, cluster 15 drew our attention (Fig. d), as it includes a group of three laminins, LAMB1, LAMC1 and LAMA1, with the highest fold change in CIN (Fig. a). Laminins are key mediators of cell–cell and cell-basement membrane interaction and play a major role in cell adhesion, differentiation, and migration during embryogenesis . Several lines of evidence highlight their role in EMT induction and in promoting cancer aggressiveness. Furthermore, CIN + PDOs displayed a significant reduction of EPCAM, in accordance with loss of the epithelial phenotype. We also detected in CIN + PDOs enhanced expression of the CD44 gene, encoding a cell-surface receptor that plays a role in cell–cell interactions, cell adhesion and migration, which is increased in cancer cells with an EMT stem cell-like phenotype . Intriguingly, some mitochondrial enzymes, which according to the proteotranscriptomic analysis appeared enriched in CIN + PDOs, might themselves contribute to promoting EMT, as they are involved in the generation of metabolites (i.e. acetyl-CoA, α-ketoglutarate, succinyl-CoA) that serve as crucial ‘ink’ on epigenetic modifications . Collectively, these data suggested an association between CIN and EMT in our models. However, EMT is a complex, highly regulated and reversible process that requires the activation and silencing of many genes. Therefore, beyond specific changes, we analysed whether the phenotype of our models was globally compatible with an EMT phenotype. An unsupervised hierarchical clustering heatmap interrogating the expression of the MSigDB EMT signature revealed that CIN + PDOs clustered together among those with the highest expression of EMT-related genes (Fig. b). Furthermore, motif analysis via ISMARA software showed that CIN + organoids presented markedly higher activity of motifs belonging to transcription factors involved in EMT, such as PRRX1, SMARCC2, ETV4 and ETS2, as well as TEAD3_TEAD1, a member of the Hippo-YAP signaling pathway, as compared with CIN- organoids (Fig. c). In addition, ISMARA identified higher activity in CIN + PDOs of CREB5_CREM_JUNB, CLOCK, NFKB1, NR4A3 and GLIS3, which are related to mitochondrial function (Fig. S8), as well as motifs related to E2F transcription factors and others involved in genome instability (Fig. S8).
Proteome linked to CIN identifies putative biomarkers and novel therapeutic targets
To identify proteins and processes that best explain the CIN phenotype and could serve as biomarkers and/or therapeutic targets, we correlated the organoid proteome with the wGII. We computed a protein-wGII Pearson correlation analysis and selected 147 proteins based on p-value (Additional file S3); 70 were in common with those of the integrated analysis, while 77 were newly detected by the Pearson analysis (Fig. a). The data confirmed the relevance of laminins, showing a positive correlation with CIN (LAMA1: r 0·84, p < 0·001; LAMB1: r 0·802, p < 0·001; LAMC1: r 0·83, p < 0·001), as well as CDC42 (r 0·52, p < 0·05). In addition, we found new candidates not previously mentioned, such as IPO7 (r 0·53, p < 0·05), detected as differential at both the proteomic and transcriptomic level (Fig. a), particularly interesting for its participation in the nuclear import of different proteins, including histone H1 and the protein component of human telomerase .
IPO7's dominant cargo is YAP, a key regulator of mechanotransduction, which is activated by CDC42 and, once in the nucleus, can activate the expression of laminins and other EMT genes . Although not detected by proteomics, YAP1 mRNA levels were higher in CIN + PDOs (Fig. S9). On the other hand, we detected an enrichment of motifs recognised by TEAD transcription factors involved in YAP signalling (Fig. c). Taken together, these data support a putative activation of YAP signalling in CIN + models. To support this hypothesis, colocalization analysis of IPO7 and YAP1 showed not only a higher expression of both proteins in CIN + models (Fig. b-c), but also a significant correlation of their respective nuclear localization, supporting their putative association in CIN organoids (Fig. d). On the other hand, two members of the annexin A (ANXA) calcium-regulated phospholipid-binding protein family showed the strongest correlations with CIN, in opposite directions: ANXA7 (r 0·91, p < 0·001) and ANXA1 (r −0·83, p < 0·001). ANXA7 was reported to promote EMT, contributing to hepatocellular carcinoma aggressiveness . The role of ANXA1 is unclear, with contradictory reports indicating it as either an inhibitor or an activator of EMT . Another highly correlated protein was the mitochondrial trifunctional enzyme subunit-alpha HADHA (r 0·82, p < 0·001), which has a key role in FAO, again suggesting metabolic rewiring in the CIN phenotype. Moving beyond individual proteins, a functional protein–protein interaction network was generated. A main network emerged, with two clear large modules related to ‘mitochondrial metabolism’ and ‘cytoskeleton and extracellular matrix’ and two small ones related to ‘endoplasmic reticulum (ER) stress response’ and ‘mRNA processing and translation’. Figure e shows the generation of eight clusters (shaded in different colours) with more than 4 nodes, with colour coding to indicate the degree of positive or negative Pearson correlation. Functional enrichment analysis of the topological clustering (detailed in Fig. S10) was performed (Additional file S3), and selected processes that best define the proteins of each cluster are shown in Fig. f. Many clusters reinforced some of the principal features highlighted in the integrative analysis, but with noteworthy new findings. Cluster 1, related to mitochondrial metabolism, contained many proteins linked to acyltransferase activity and FAO, confirming their correlation with CIN but with two new acyltransferases: ACAA2 (r 0·73, p < 0·001), a rate-limiting enzyme catalysing the last step of the mitochondrial beta-oxidation pathway, and ACAT1 (r 0·61, p < 0·01), a key rate-limiting enzyme in ketone body metabolism responsible for recycling ketone bodies into acetyl-CoA, reinforcing a process already highlighted in the integrative analysis. As proof of concept, since previous studies indicated that increased acetyl-CoA coincides with elevated acetylation , we analysed lysine acetylation in proteins, as it is the prevalent modification in chromatin, and observed a mostly nuclear signal, more intense in CIN + PDOs, although the difference was only significant in the model with the highest CIN value (Fig. S11).
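The protein–wGII correlation screen underlying the coefficients reported in this and the following paragraphs can be sketched in R as below. The matrix is simulated; the original screen was run with the PEARSON function in Excel on median-normalized, log2-transformed protein areas, so p-value handling and filtering thresholds here are assumptions rather than the exact original procedure.

```r
# Sketch of the proteome-wGII screen: Pearson r and p-value for each protein's
# abundance against per-sample wGII, keeping proteins with p < 0.05.
set.seed(2)
n_samples <- 10
wgii <- runif(n_samples, 0.05, 0.6)
prot <- matrix(rnorm(200 * n_samples), nrow = 200,
               dimnames = list(paste0("protein", 1:200), paste0("PDO", 1:n_samples)))
prot[1, ] <- prot[1, ] + 5 * wgii     # one protein made to track wGII

res <- t(apply(prot, 1, function(x) {
  ct <- cor.test(x, wgii, method = "pearson")
  c(r = unname(ct$estimate), p = ct$p.value)
}))
res  <- as.data.frame(res)
hits <- res[res$p < 0.05, ]
hits[order(-abs(hits$r)), ]
```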
The TCA cycle and respiratory electron transport also correlated with CIN, with virtually the same proteins as in the integrative analysis plus citrate synthase (CS: r 0·59, p < 0·01), pyruvate dehydrogenase (PDHA1: r 0·49, p < 0·05) and ETFA (r 0·502, p < 0·05), a flavoprotein required for electron transfer to the respiratory chain from various acyl-CoA dehydrogenases involved in fatty acid and amino acid oxidation. Glutamine metabolism (with GLS1 and GLUD1: r 0·69, p < 0·01 and r 0·49, p < 0·05, respectively), previously undetected, was found to be correlated with CIN. Further supporting the role of mitochondrial function in CIN, among the differential proteins whose abundance increased, mitochondrial proteins were significantly over-represented (Fig. g). Moreover, we analysed the expression levels of genes/proteins related to mitochondrial function using MitoCarta 3.0 ( https://www.broadinstitute.org/files/shared/metabolism/mitocarta/human.mitocarta3.0.html ). With this approach we identified a set of differentially expressed proteins, showing that CIN + samples present a notably higher expression of mitochondrial proteins, both in PDOs (Fig. h) and in an independent CRC tissue dataset (Zhang et al.) (Fig. i). Taken together, these findings suggest an increased mitochondrial function in CIN + CRCs. Cluster 2 highlighted the endoplasmic reticulum (ER) stress response, an adaptive survival mechanism, activated by stressful circumstances such as the accumulation of unfolded proteins, that is exploited by cancer cells. Regarding cell shape and stiffness, cluster 4 (enriched in the organization of the extracellular matrix, with laminins as prominent members) was detected, with a new protein, HSPG2 (r 0·69, p < 0·01), an extracellular matrix proteoglycan which contributes to invasion, metastasis, and angiogenesis in solid tumours, including CRC . Furthermore, actin cytoskeleton organization was confirmed as an enriched process in clusters 5, 7 and 8, with new components. Among these, the alpha- and beta-subunits of the heterodimeric CAPZ protein negatively correlated with CIN (CAPZA1: r −0·58, p < 0·05; CAPZA2: r −0·51, p < 0·05), while there was a positive correlation with CFL1 (r 0·48, p < 0·05), an essential regulator of actin filament dynamics. The actin-capping protein CAPZ binds to the barbed end of actin, preventing actin filament growth . Interestingly, CAPZA1 was also reported to inhibit EMT in hepatocellular carcinoma by regulating the actin cytoskeleton . In contrast, CFL1 was described as crucial in the switch from epithelial to mesenchymal-like morphology, and in cell migration and invasion in CRC cells . Along the same lines, keratins are the main identification markers of circulating tumour cells. In particular, KRT16 (r 0·68, p < 0·01) protein expression was associated with an intermediate mesenchymal phenotype with a regulatory effect on EMT .
Prioritisation of potential therapeutic targets associated with chromosomal instability in CRC
To prioritise proteins or processes found associated with CIN in our organoids, we leveraged publicly available databases. We first evaluated the expression of the wGII-protein list across an independent proteomic dataset where protein abundance was determined in a similar manner, although CIN was transcriptomically defined.
We found that 44 of the 147 proteins of our list were differentially expressed between CIN and MSI tissues (Additional file S4), with 30 showing the same change in expression as in our organoid cohort, including, for instance, IPO7, CFL1, LRPPRC, ANXA1, SUCLG2, ACAA2 and HADHA, the last three being acyltransferase enzymes. The acyltransferase ACAT1 was not significantly differentially expressed, but we found a trend towards an increase in its expression in CIN tissues (Fig. S12). An unsupervised clustering heatmap showed that these proteins were able to classify the tissues according to CIN (Fisher exact test, p-value < 0·0001) (Fig. a-b). Next, employing functional genomics databases (DepMap), we analysed the role of wGII-proteins in CIN through loss-of-function screens in 15 CRC cell lines compatible with CIN and MSI status . For RNAi, not all genes were present, so we analysed the effect of 130 of them (Supplementary Methods and Additional file S4). Among the genes with stronger dependencies in CIN + cell lines (Fig. c-d), seven encode proteins positively correlated with CIN: two chaperones related to the ‘ER stress response’, HSP90 and P4HB; two mitochondrial enzymes, SUCLG2 and ACAA2, linked to acyl-CoA transfer in the TCA and FAO metabolic pathways, respectively; the mitochondrial phosphate carrier SLC25A3; and two keratins, KRT5 and KRT16, cytoskeleton structural proteins. We also queried DepMap to determine, in a large-scale screening, those drugs among the 497 checked that were more effective in CIN + cell lines (Additional file S4). After eliminating compounds without a defined target and those for which only one line was tested, we identified 26 of them with a differential effect, 15 significantly more effective in CIN + with respect to MSI cell lines (Fig. e). Two of them caught our attention: the YAP inhibitor CIL56, since our data point to a possible activation of YAP signalling in our models, and CI-976, an inhibitor of cholesterol acyl transferase (ACAT1/SOAT1). Thereafter, we analysed the prognostic effect of the prioritised targets using KM-plot, considering MSS patients as a good approximation of CIN patients, as CIN status was not available and as an inverse relation has been observed between CIN and MSI . Interestingly, we found that some targets influenced survival: higher expression of YAP1, IPO7, CDC42, LAMB1, LAMC1, KRT16 and GLS was associated with lower survival in MSS patients (Fig. S13) (OS: YAP1 HR 1·7, pval = 0·000088; IPO7 HR 1·37, pval = 0·059; CDC42 HR 1·73, pval = 3e-04; LAMB1 HR 2·39, pval = 0·00081; LAMC1 HR 2·37, pval = 0·00052; KRT16 HR 1·58, pval = 0·0038; GLS HR 1·67, pval = 0·00083), while for ACAT1 there was an effect on relapse-free survival in advanced stages (1·51, pval = 0·027) (Fig. S14). Furthermore, all these survival effects were specific for MSS patients, as in MSI there was either no significant effect or the opposite effect, as in the case of CDC42 and ACAT1 (Fig. S13-S14). Finally, we analysed the expression of these prioritised targets in CRC versus matched normal mucosa using the TNM-plot tool ( https://tnmplot.com/analysis/ ) and found that most of them show tumour-specific mRNA expression, except for ACAT1 and CDC42, in contrast to the literature (Fig. S15). Table summarizes the prioritised processes and targets identified.
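To illustrate the DepMap-style comparison described above, the sketch below runs a per-gene t-test of dependence scores between CIN+ and CIN- cell lines, the kind of multiple t-test/volcano summary referred to in the Methods. All scores and group sizes are simulated, and the gene labels are taken from the text only as placeholders; real inputs would be the CRISPR/Chronos or DEMETER2 matrices downloaded from the DepMap portal.

```r
# Sketch: per-gene comparison of (simulated) dependence scores between CIN+ and
# CIN- CRC cell lines; more negative scores indicate stronger dependency.
set.seed(3)
genes   <- c("HSP90", "P4HB", "SUCLG2", "ACAA2", "SLC25A3", "KRT5", "KRT16")
cin_pos <- matrix(rnorm(length(genes) * 8, mean = -0.4, sd = 0.2),
                  nrow = length(genes), dimnames = list(genes, NULL))
cin_neg <- matrix(rnorm(length(genes) * 7, mean = -0.1, sd = 0.2),
                  nrow = length(genes), dimnames = list(genes, NULL))

res <- data.frame(
  gene   = genes,
  effect = rowMeans(cin_pos) - rowMeans(cin_neg),  # negative = stronger dependency in CIN+
  p      = sapply(genes, function(g) t.test(cin_pos[g, ], cin_neg[g, ])$p.value)
)
res[order(res$p), ]
plot(res$effect, -log10(res$p),
     xlab = "Dependence score difference (CIN+ minus CIN-)",
     ylab = "-log10 p", main = "Volcano-style summary")
```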
CytoscanHD was conducted on PDOs and matched fresh tissues to detect CNA (Additional file S1). We included 9 patients/12 PDOs for which we have copy number data, a sample size similar to previous studies . As we previously demonstrated with fewer models , they share the same copy number profile (Fig S1), and in most cases organoids were enriched in gains/losses compared to the tissues. The wGII was employed as a surrogate of CIN, as a wGII > 0·2 indicates the presence of CIN . Across our cohort of twelve matched organoids and tissues (Table S2), 70% of samples displayed CIN (Fig. a-b), consistent with the literature, and PDO CIN had a positive correlation with tissue’ CIN (linear regression, r = 0·867, p -value = 0·001) (Fig. c). Nevertheless, two cases showed substantial discordance. For patient 43, the wGII was much higher in organoid than tissue, while in patient 47, tissue was negative while PDOs had one of the highest wGII, compatible with an extremely low tumour cells percentage . Interestingly, CIN status can be evolutionarily acquired, as shown by patient 24, where organoids generated from two different synchronous liver metastases were CIN-, but the tissue obtained from a progressive brain metastasis was CIN + (Fig. a), indicating the acquisition of a more aggressive phenotype. PDOs generated from different metastases of the same patient show morphological heterogeneity in culture (Fig. d). Using the comprehensive coverage of CytoscanHD which goes up to gene level, we assessed intra-patient copy number heterogeneity from different metastases, detecting significant differences in CNA (Fig. e). Unfortunately, this analysis was precluded for mCTO38S8 and for the brain metastasis of patient 24 where only tissue comparison was possible, due to lack of growth before obtaining sufficient DNA in the first case, and not at all for the second. We also detected mosaicism, confirming heterogeneity in terms of subclonal CNA (Fig. S2). Next, lost/gained genes were matched with the most commonly reported within the TCGA. The gene list was extracted from CRC TCGA (CBioPortal) selecting genes altered in at least 1% of samples. These were compared with our PDOs-tissues cohort (Fig. a). The genes that most frequently present copy number gains/losses across the TCGA were found within our PDOs and confirmed in our tissues. The most frequently gained/lost genes in our cohorts specifically are depicted in Fig. b. Among the amplified genes AURKA was gained in 80% of PDOs and has been associated with CIN as it regulates the function of centrosomes, spindles and kinetochores for mitotic progression , and TOP1 , involved in the stabilization of long chromosomes . Indeed, on histological slides from CIN and non-CIN subtypes, numerous mitotic figures were evident in both groups (between 6 to 18 per 5 high power fields (HPF) (Fig. S3a). However, atypical mitotic figures appeared to be more frequently observed in the CIN subtype (64% vs. 48%), supporting a phenotype indicative of mitotic spindle alterations in CIN PDOs. Among the lost genes were PCM1 , involved in centrosome integrity maintenance, TUSC3 , which can inhibit EMT, and FHIT , related with CIN. When we consider the pathogenic mutations of PDOs , mutant genes tend to have more copy number variations in CIN + organoids (Fig.S3b). We previously showed that PDOs reproduced the transcriptomic profile of their original tissues so here we hypothesized whether PDOs would reflect CIN + in an independent tissues’ cohort at the transcriptomic level. 
We performed differential expression analysis between all CIN+ and CIN- PDOs. GSEA showed that CIN+ PDOs present an upregulation of MYC and E2F targets and of the "protein secretion" and "unfolded protein response" signatures, as well as a downregulation of the "TNFα signaling", "p53 pathway" and "apoptosis" signatures (Fig. S4). Subsequently, we found that using the 100 most differentially expressed genes (Fig. c; Additional file S2) we obtained a good classification of the COAD PanCancer cohort in the TCGA (Fig. d; chi-square test, p < 0·0001, Fig. S5). Many of the genes in the top 100 were previously associated with CIN for their roles in the dynamics of the mitotic spindle (NSMCE2), the kinetochore-microtubule complex (FDG1) and the regulation of ploidy (NUAK1). Another interesting gene was NPHP4, which encodes a negative Hippo pathway regulator that has been associated with LATS1-induced chromosomal instability . Moreover, we contrasted our data with previously described CIN signatures and found that the HET70 CIN signature independently classified our PDOs as CIN+ or CIN- (pval < 0·001, Fisher's exact test) (Fig. S6), aligning with the findings of Sheltzer .
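The classification step can be sketched as follows: cluster the TCGA samples on the 100-gene organoid signature and test the association of the resulting clusters with CIN status. This is an illustrative sketch only; the file names, the CIN label column, Ward clustering and the two-cluster cut are assumptions rather than the authors' exact settings.

```python
# Illustrative sketch: cluster TCGA samples on the 100-gene organoid signature and
# test the cluster-vs-CIN association with a chi-square test. Inputs are hypothetical.
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import chi2_contingency

expr = pd.read_csv("tcga_expression_zscore.csv", index_col=0)    # genes x samples, z-scored
labels = pd.read_csv("tcga_cin_status.csv", index_col=0)["CIN"]  # e.g. "CIN+" / "CIN-"
top100 = [line.strip() for line in open("organoid_top100_genes.txt")]

signature = expr.loc[expr.index.intersection(top100)].T          # samples x signature genes
clusters = fcluster(linkage(signature.values, method="ward"), t=2, criterion="maxclust")

table = pd.crosstab(clusters, labels.loc[signature.index])
chi2, p, dof, _ = chi2_contingency(table)
print(table)
print(f"chi-square p = {p:.2e}")
```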
Since PDOs were a good model of CIN+ CRC at the genomic and transcriptomic levels, we further investigated the molecular mechanisms underlying CIN in organoids, applying an integrated proteotranscriptomic strategy . As SWATH-MS could be performed for a subset of PDOs, we computed differential gene expression analysis for the same models employed in proteomics (Table S2). We generated a proteomic dataset, identifying 116 proteins (with ≥ two peptides), 62 upregulated and 54 downregulated in CIN+ PDOs (Additional file S3), while in the transcriptomic dataset 1017 genes were differentially expressed, 644 upregulated and 373 downregulated (Additional file S2). Among them, 15 were detected in both datasets (Fig. a), with highly correlated log2 FC values, whereas only a modest correlation was observed for proteins whose differences were significant only at the RNA level (Fig. S7), highlighting the value of integrating both datasets in such studies. As most proteins (101/116) were differential only at the protein level, we performed a functional annotation study of these and found acetylation to be the term containing the highest number of proteins (66/101) (Fig. b). These data align with the key role of post-translational modifications (PTMs) in gene expression regulation, with acetylation being particularly relevant in this dataset. Our approach involved extracting common processes through an integrative network analysis using the STRING database via Cytoscape . We focused on the largest high-confidence (score 0·7) functionally related network obtained (Fig. c). It contained 454 nodes, with 80 proteins, 364 RNAs and 10 differentially expressed genes detected at both the protein and RNA level. Functional enrichment confirmed that most gene ontologies (GOs) were represented by both omic layers, with four highly connected modules annotated as 'metabolism', 'cytoskeleton organization and extracellular matrix', 'gene expression and chromatin' and 'signaling' (Fig. c). A topological clustering algorithm was applied to explore functional interactions between groups of nodes inside modules (Fig. d), with proteins included in the term 'acetylation' in panel b highlighted as larger blue nodes. Functional enrichment is shown in Fig. e, where the GO term and/or the first functional term with the lowest FDR has been selected for each cluster. The complete data are in Additional file S3. The 'metabolic module', containing seven clusters, showed a notable role for mitochondrial metabolism. Electron transport chain and mitochondrial ATP synthesis processes were enriched in cluster 3, including different subunits of complexes III and IV and the mitochondrial phosphate carrier SLC25A3, an essential component of the ATP synthasome . Several enzymes involved in the tricarboxylic acid (TCA) cycle appeared in cluster 9, such as SUCLA2 and SUCLG1, the two subunits of succinate-CoA ligase, which functions in the TCA cycle by coupling the hydrolysis of succinyl-CoA to the synthesis of ATP. Strikingly, two other clusters (20 and 5) were functionally related to many processes involving acyl-CoA species. Cluster 20 was enriched in genes related to fatty acid beta-oxidation (FAO) and branched-chain amino acid (BCAA) degradation, while cluster 5 was associated with ketone body metabolism, which includes the coenzyme A synthase gene COASY. In the 'cytoskeleton and extracellular matrix' module, nine clusters emerged.
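A minimal sketch of the proteome-transcriptome overlap described above is given below: intersect the two differential lists and correlate their log2 fold changes (cf. Fig. S7). File and column names are hypothetical, and the correlation statistic is an illustrative choice.

```python
# Illustrative sketch: overlap of differential proteins and transcripts, and
# correlation of their log2 fold changes (cf. Fig. S7). Inputs are hypothetical.
import pandas as pd
from scipy import stats

prot = pd.read_csv("swath_differential_proteins.csv", index_col="gene")  # columns incl. log2FC
rna = pd.read_csv("rnaseq_differential_genes.csv", index_col="gene")     # columns incl. log2FC

shared = prot.index.intersection(rna.index)
print(f"{len(shared)} genes differential at both the protein and RNA level")

r, p = stats.pearsonr(prot.loc[shared, "log2FC"], rna.loc[shared, "log2FC"])
print(f"protein vs RNA log2FC correlation (shared hits): r={r:.2f}, p={p:.3g}")

# Proteins changing only at the protein level, kept for functional annotation
protein_only = prot.index.difference(rna.index)
print(f"{len(protein_only)} proteins differential only at the protein level")
```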
Cluster 4 contained a group of differential RNAs involved in actin cytoskeleton organization, integrated through the upregulated hub protein CDC42, a small GTPase involved in the regulation of signalling pathways that control cell cycle progression, migration and morphology . Cluster 15 was composed of a set of extracellular matrix glycoproteins, including three laminins (LAMB1, LAMC1 and LAMA1) together with COL18A1, all upregulated in CIN+ . Cluster 27, meanwhile, included significantly reduced cell adhesion proteins well known for their involvement in gastrointestinal cancers, such as EPCAM and CEACAM5. In the 'gene expression and chromatin' module, mixed clusters 2, 7 and 8 were formed by RNA-binding and ribosomal proteins involved in modulating the stability and translation of certain mRNAs. Among these was LRPPRC, a potential oncogene in multiple tumour types detected at both the RNA and protein level, which plays a role in mitochondrial homeostasis . We also detected upregulation of IGF2BP1/2, two genes that encode oncofoetal IGF2 mRNA-binding proteins acting as readers of the RNA N6-methyladenosine modification. Recently, IGF2BP2 was shown to promote CRC progression by stabilizing oncogenic mRNAs, including YAP mRNA , which encodes a downstream nuclear effector of the Hippo signaling pathway with a role in the development and progression of multiple cancers. In the 'signaling' module, cluster 6 showed the integration of genes and proteins related to the regulation of FOXO transcription factor localization, with a significant reduction in YWHAZ and SFN expression. These are members of the 14-3-3 protein family, which associate with YAP, leading to its translocation to the cytoplasm and consequent inhibition . Finally, cluster 13 contained protein phosphatases involved in chromosome segregation and spindle formation, and cluster 26 showed the DNA-PK protein integrated with genes of the DNA repair machinery related to chromosome organization.
Among the clusters of the integrated analysis, cluster 15 drew our attention (Fig. d), as it includes a group of three laminins, LAMB1, LAMC1 and LAMA1, with the highest fold change in CIN (Fig. a). Laminins are key mediators of cell–cell and cell-basement membrane interaction and play a major role in cell adhesion, differentiation, and migration during embryogenesis . Several lines of evidence highlight their role in EMT induction and in promoting cancer aggressiveness. Furthermore, CIN+ PDOs displayed a significant reduction of EPCAM, in accordance with a loss of the epithelial phenotype. We also detected in CIN+ PDOs enhanced expression of the CD44 gene, encoding a cell-surface receptor that plays a role in cell–cell interactions, cell adhesion and migration and is increased in cancer cells with an EMT stem cell-like phenotype . Intriguingly, some mitochondrial enzymes, which according to the proteotranscriptomic analysis appeared enriched in CIN+ PDOs, might themselves contribute to promoting EMT, as they are involved in the generation of metabolites (i.e., acetyl-CoA, α-ketoglutarate, succinyl-CoA) that serve as crucial 'ink' for epigenetic modifications . Collectively, these data suggested an association between CIN and EMT in our models. However, EMT is a complex, highly regulated and reversible process that requires the activation and silencing of many genes. Therefore, beyond specific changes, we analysed whether the phenotype of our models was globally compatible with an EMT phenotype. An unsupervised hierarchical clustering heatmap interrogating the expression of the MSigDB EMT signature revealed that CIN+ PDOs clustered together among those with the highest expression of EMT-related genes (Fig. b). Furthermore, motif analysis via the ISMARA software showed that, compared with CIN- organoids, CIN+ organoids present markedly higher activity of motifs belonging to transcription factors involved in EMT, such as PRRX1, SMARCC2, ETV4 and ETS2, as well as TEAD3_TEAD1, a member of the Hippo-YAP signaling pathway (Fig. c). In addition, ISMARA identified higher activity in CIN+ PDOs of CREB5_CREM_JUNB, CLOCK, NFKB1, NR4A3 and GLIS3, which are related to mitochondrial function (Fig. S8), as well as of motifs related to E2F transcription factors and others involved in genome instability (Fig. S8).
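The EMT-signature check can be illustrated with the following sketch, which scores each organoid on an MSigDB hallmark EMT gene set (mean z-score) and clusters the organoids on that signature. It is a minimal example under assumed inputs; the expression file, the gene-set file and the average-linkage clustering are placeholders, not the authors' exact procedure.

```python
# Illustrative sketch: per-organoid EMT score (mean z-score over an MSigDB hallmark
# EMT gene set) and unsupervised clustering on the signature. Inputs are hypothetical.
import pandas as pd
from scipy.cluster.hierarchy import linkage, dendrogram

expr = pd.read_csv("pdo_expression_vst.csv", index_col=0)        # genes x organoids
emt_genes = [line.strip() for line in open("hallmark_emt_genes.txt")]

emt = expr.loc[expr.index.intersection(emt_genes)]
emt_z = emt.sub(emt.mean(axis=1), axis=0).div(emt.std(axis=1), axis=0)  # z-score each gene

emt_score = emt_z.mean(axis=0).sort_values(ascending=False)      # simple per-organoid EMT score
print(emt_score)

# Column order of an unsupervised heatmap (average-linkage hierarchical clustering)
order = dendrogram(linkage(emt_z.T.values, method="average"), no_plot=True)["leaves"]
print("organoid order:", list(emt_z.columns[order]))
```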
To identify proteins and processes that best explain the CIN phenotype and could serve as biomarkers and/or therapeutic targets, we correlated the organoid proteome with the wGII. We computed a protein-wGII Pearson correlation analysis and selected 147 proteins based on p-value (Additional file S3); 70 were in common with those of the integrated analysis, while 77 were newly detected by the Pearson analysis (Fig. a). The data confirmed the relevance of laminins, showing a positive correlation with CIN (LAMA1: r 0·84, p < 0·001; LAMB1: r 0·802, p < 0·001; LAMC1: r 0·83, p < 0·001), as well as CDC42 (r 0·52, p < 0·05). In addition, we found new candidates not previously mentioned, such as IPO7 (r 0·53, p < 0·05), detected as differential at both the proteomic and transcriptomic level (Fig. a) and particularly interesting for its participation in the nuclear import of different proteins, including histone H1 and the protein component of human telomerase . IPO7's dominant cargo is YAP, a key regulator of mechanotransduction, which is activated by CDC42 and, once in the nucleus, can activate the expression of laminins and other EMT genes . Although not detected by proteomics, YAP1 mRNA levels were higher in CIN+ PDOs (Fig. S9). On the other hand, we detected an enrichment of motifs recognised by TEAD transcription factors involved in YAP signalling (Fig. c). Taken together, these data support a putative activation of YAP signalling in CIN+ models. In support of this hypothesis, colocalization analysis of IPO7 and YAP1 showed not only a higher expression of both proteins in CIN+ models (Fig. b-c) but also a significant correlation of their respective nuclear localization, supporting their putative association in CIN organoids (Fig. d). On the other hand, two members of the annexin A (ANXA) calcium-regulated phospholipid-binding protein family presented the highest correlation coefficients, with opposite signs: ANXA7 (r 0·91, p < 0·001) and ANXA1 (r −0·83, p < 0·001). ANXA7 was reported to promote EMT, contributing to hepatocellular carcinoma aggressiveness . The role of ANXA1 is unclear, with contradictory reports indicating it as either an inhibitor or an activator of EMT . Another highly correlated protein was the mitochondrial trifunctional enzyme subunit-alpha HADHA (r 0·82, p < 0·001), which has a key role in FAO, again suggesting metabolic rewiring in the CIN phenotype. Moving beyond individual proteins, a functional protein–protein interaction network was generated. A principal network emerged, with two large modules related to 'mitochondrial metabolism' and 'cytoskeleton and extracellular matrix' and two small ones related to 'endoplasmic reticulum (ER) stress response' and 'mRNA processing and translation'. Figure e shows the eight clusters with more than four nodes (shaded in different colours), with colour coding indicating the degree of positive or negative Pearson correlation. Functional enrichment analysis of the topological clustering (detailed in Fig. S10) was performed (Additional file S3), and selected processes that best define the proteins of each cluster are shown in Fig. f. Many clusters reinforced some of the principal features highlighted in the integrative analysis, but with noteworthy new findings.
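The protein-wGII screen lends itself to a simple sketch: compute a Pearson r and p-value for every quantified protein against the per-organoid wGII and keep the proteins below a p-value cut-off (the 147-protein list in the text). Input file names and the 0·05 cut-off shown here are assumptions.

```python
# Illustrative sketch: Pearson correlation of every quantified protein with the
# per-organoid wGII, keeping proteins below a p-value cut-off. Inputs are hypothetical.
import pandas as pd
from scipy import stats

prot = pd.read_csv("swath_protein_matrix.csv", index_col=0)      # proteins x organoids
wgii = pd.read_csv("organoid_wgii.csv", index_col=0)["wGII"]     # one value per organoid
wgii = wgii.loc[prot.columns]                                    # align sample order

records = []
for protein, values in prot.iterrows():
    r, p = stats.pearsonr(values.values, wgii.values)
    records.append((protein, r, p))

corr = (pd.DataFrame(records, columns=["protein", "r", "p"])
          .set_index("protein")
          .sort_values("p"))
selected = corr[corr["p"] < 0.05]    # assumed cut-off; the text reports 147 proteins
print(selected.head(10))
```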
Cluster 1, related to mitochondrial metabolism, contained many proteins linked to acyltransferase activity and FAO, confirming their correlation with CIN and adding two new acyltransferases: ACAA2 (r 0·73, p < 0·001), a rate-limiting enzyme catalysing the last step of the mitochondrial beta-oxidation pathway, and ACAT1 (r 0·61, p < 0·01), a key rate-limiting enzyme in ketone body metabolism responsible for recycling ketone bodies into acetyl-CoA, reinforcing a process already highlighted in the integrative analysis. As a proof of concept, since previous studies indicated that increased acetyl-CoA coincides with elevated acetylation , we analysed protein lysine acetylation, as it is the prevalent modification in chromatin, and observed a mostly nuclear signal that was more intense in CIN+ PDOs, although the difference was only significant in the model with the highest CIN value (Fig. S11). The TCA cycle and respiratory electron transport also correlated with CIN, with virtually the same proteins as in the integrative analysis plus citrate synthase (CS: r 0·59, p < 0·01), pyruvate dehydrogenase (PDHA1: r 0·49, p < 0·05) and ETFA (r 0·502, p < 0·05), a flavoprotein required for electron transfer to the respiratory chain from various acyl-CoA dehydrogenases involved in fatty acid and amino acid oxidation. Glutamine metabolism (with GLS1 and GLUD1: r 0·69, p < 0·01 and r 0·49, p < 0·05, respectively), previously undetected, was also found to be correlated with CIN. Further supporting the role of mitochondrial function in CIN, mitochondrial proteins were significantly over-represented among the differential proteins whose abundance increased (Fig. g). Moreover, we analysed the expression levels of genes/proteins related to mitochondrial function using MitoCarta 3.0 ( https://www.broadinstitute.org/files/shared/metabolism/mitocarta/human.mitocarta3.0.html ). With this approach we identified a set of differentially expressed proteins showing that CIN+ samples present notably higher expression of mitochondrial proteins, both in PDOs (Fig. h) and in an independent CRC tissue dataset (Zhang et al.) (Fig. i). Taken together, these findings suggest increased mitochondrial function in CIN+ CRCs. Cluster 2 highlighted the endoplasmic reticulum (ER) stress response, an adaptive survival mechanism activated by stressful circumstances, such as the accumulation of unfolded proteins, that is exploited by cancer cells. Regarding cell shape and stiffness, cluster 4, enriched in extracellular matrix organization with laminins as prominent members, was detected together with a new protein, HSPG2 (r 0·69, p < 0·01), an extracellular matrix proteoglycan that contributes to invasion, metastasis, and angiogenesis in solid tumours, including CRC . Furthermore, actin cytoskeleton organization was confirmed as an enriched process in clusters 5, 7 and 8, with new components. Among these, the alpha- and beta-subunits of the heterodimeric CAPZ protein negatively correlated with CIN (CAPZA1: r −0·58, p < 0·05; CAPZA2: r −0·51, p < 0·05), while there was a positive correlation with CFL1 (r 0·48, p < 0·05), an essential regulator of actin filament dynamics. The actin-capping protein CAPZ binds to the barbed end of actin, preventing actin filament growth . Interestingly, CAPZA1 was also reported to inhibit EMT in hepatocellular carcinoma by regulating the actin cytoskeleton .
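The mitochondrial over-representation check can be sketched as a Fisher's exact test asking whether MitoCarta 3.0 proteins are enriched among the proteins that increase with CIN, relative to all quantified proteins. The file names and formats below (including a CSV export of the MitoCarta table) are assumptions; the test itself stands in for whatever statistic underlies Fig. g.

```python
# Illustrative sketch: Fisher's exact test for over-representation of MitoCarta 3.0
# proteins among proteins whose abundance increases with CIN. Inputs are hypothetical,
# including a CSV export of the MitoCarta table.
import pandas as pd
from scipy.stats import fisher_exact

quantified = set(pd.read_csv("swath_protein_matrix.csv", index_col=0).index)
upregulated = set(pd.read_csv("cin_upregulated_proteins.csv")["gene"])
mitocarta = set(pd.read_csv("Human.MitoCarta3.0.csv")["Symbol"])

up_mito = len(upregulated & mitocarta)
up_other = len(upregulated - mitocarta)
rest = quantified - upregulated
rest_mito = len(rest & mitocarta)
rest_other = len(rest - mitocarta)

odds, p = fisher_exact([[up_mito, up_other], [rest_mito, rest_other]], alternative="greater")
print(f"mitochondrial proteins among CIN-increased hits: OR={odds:.2f}, p={p:.3g}")
```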
In contrast, CFL1 was described as crucial for the switch from an epithelial to a mesenchymal-like morphology and for cell migration and invasion in CRC cells . Along the same lines, keratins are the main identification markers of circulating tumour cells. In particular, KRT16 (r 0·68, p < 0·01) protein expression was associated with an intermediate mesenchymal phenotype and a regulatory effect on EMT .
To prioritise proteins or processes associated with CIN in our organoids, we leveraged publicly available databases. We first evaluated the expression of the wGII-protein list across an independent proteomic dataset in which protein abundance was determined in a similar manner, although CIN was transcriptomically defined. We found that 44 of the 147 proteins of our list were differentially expressed between CIN and MSI tissues (Additional file S4), with 30 showing the same change in expression as in our organoid cohort, including, for instance, IPO7, CFL1, LRPPRC, ANXA1, SUCLG2, ACAA2 and HADHA, the last three being acyltransferase enzymes. The acyltransferase ACAT1 was not significantly differentially expressed, but we found a trend towards an increase in its expression in CIN tissues (Fig. S12). An unsupervised clustering heatmap showed that these proteins were able to classify the tissues according to CIN (Fisher's exact test, p-value < 0·0001) (Fig. a-b). Next, employing functional genomics databases (DepMap), we analysed the role of the wGII-proteins in CIN through loss-of-function screens in 15 CRC cell lines compatible with CIN or MSI status . Not all genes were present in the RNAi screen, so we analysed the effect of 130 of them (Supplementary Methods and Additional file S4). Among the genes with stronger dependencies in CIN+ cell lines (Fig. c-d), seven encode proteins positively correlated with CIN: two chaperones related to the 'ER stress response', HSP90 and P4HB; two mitochondrial enzymes, SUCLG2 and ACAA2, linked to acyl-CoA transfer in the TCA and FAO metabolic pathways, respectively; the mitochondrial phosphate carrier SLC25A3; and two keratins, KRT5 and KRT16, cytoskeletal structural proteins. We also queried DepMap to determine, in a large-scale screening, which of the 497 drugs checked were more effective in CIN+ cell lines (Additional file S4). After eliminating compounds without a defined target and those for which only one line was tested, we identified 26 compounds with a differential effect, 15 of which were significantly more effective in CIN+ than in MSI cell lines (Fig. e). Two of them caught our attention: the YAP inhibitor CIL56, since our data point to a possible activation of YAP signalling in our models, and CI-976, an inhibitor of cholesterol acyl transferase (ACAT1/SOAT1). Thereafter, we analysed the prognostic effect of the prioritised targets using KM-plot, considering MSS patients as a good approximation of CIN patients, since CIN status was not available and an inverse relation has been observed between CIN and MSI . Interestingly, we found that some targets influenced survival: higher expression of YAP1, IPO7, CDC42, LAMB1, LAMC1, KRT16 and GLS was associated with lower survival in MSS patients (Fig. S13) (OS: YAP1 HR 1·7, pval = 0·000088; IPO7 HR 1·37, pval = 0·059; CDC42 HR 1·73, pval = 3e-04; LAMB1 HR 2·39, pval = 0·00081; LAMC1 HR 2·37, pval = 0·00052; KRT16 HR 1·58, pval = 0·0038; GLS HR 1·67, pval = 0·00083), while for ACAT1 there was an effect on relapse-free survival in advanced stages (HR 1·51, pval = 0·027) (Fig. S14). Furthermore, all these survival effects were specific to MSS patients, as in MSI patients there was either no significant effect or the opposite effect, as in the case of CDC42 and ACAT1 (Fig. S13-S14).
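The dependency comparison on DepMap data can be illustrated as follows: for each gene in the wGII-protein list, compare gene-effect scores between CIN+ and MSI CRC cell lines (more negative scores meaning stronger dependency). The Mann-Whitney test, file names and annotation column are illustrative stand-ins, not the authors' actual analysis.

```python
# Illustrative sketch: compare DepMap gene-effect scores between CIN+ and MSI CRC
# cell lines for the wGII-protein genes (more negative = stronger dependency).
# The Mann-Whitney test and all file/column names are assumptions.
import pandas as pd
from scipy.stats import mannwhitneyu

effect = pd.read_csv("crispr_gene_effect_crc_subset.csv", index_col=0)   # cell lines x genes
status = pd.read_csv("crc_line_annotation.csv", index_col=0)["status"]   # "CIN" or "MSI"
genes = [line.strip() for line in open("wgii_protein_genes.txt")]

cin_lines = status[status == "CIN"].index
msi_lines = status[status == "MSI"].index

rows = []
for gene in effect.columns.intersection(genes):
    cin_scores = effect.loc[cin_lines, gene].dropna()
    msi_scores = effect.loc[msi_lines, gene].dropna()
    stat, p = mannwhitneyu(cin_scores, msi_scores, alternative="less")   # stronger dependency in CIN+
    rows.append((gene, cin_scores.mean(), msi_scores.mean(), p))

deps = pd.DataFrame(rows, columns=["gene", "mean_CIN", "mean_MSI", "p"]).sort_values("p")
print(deps.head(10))
```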
Finally, we analysed the expression of these prioritised targets in CRC versus matched normal mucosa using the TNM-plot tool ( https://tnmplot.com/analysis/ ) and we found that most of them show tumour-specific mRNA expression except for ACAT1 and CDC42, in contrast to the literature (Fig. S15). Table summarizes the prioritised processes and targets identified.
This work sheds light on the mechanisms operating in CIN tumours of advanced CRC. We demonstrated that PDOs faithfully reproduced the genome, transcriptome and proteome of tissues across independent datasets. Integration of differential RNA and protein expression data of CIN+ vs CIN- PDOs allowed us to identify functional modules. Then, we correlated the proteome with the CIN value, highlighting the proteins that best explain the CIN phenotype. Finally, using functional genomic databases and patient tissue datasets, we prioritised in silico some of the high-confidence CIN features of organoids to be explored in future functional studies. Figure illustrates the experimental workflow. To understand the molecular mechanisms underlying CIN, the use of models that are as representative as possible of human tumours is imperative. Recent studies have addressed CIN in organoids at the genomic level ; here we also integrated this with deep RNA and protein expression characterization. First, we determined wGII as a proxy for CIN in organoids and tissues, showing significant concordance and a dynamic increase, as illustrated by patient 24. Although CIN generation and tolerance represent an important bottleneck during tumour progression, the ability of chromosomally unstable cells to generate genomic heterogeneity in their progeny allows the tumour to evolve and progress. Accordingly, organoids captured inter- and intra-patient heterogeneity. Starting from the most differentially expressed genes in organoids, we were able to classify TCGA tissues, showing that our models are a good phenotypic reproduction of CIN in CRC (Fig. d). The wider effects of CIN on the transcriptome are poorly understood, and even fewer studies have addressed its impact on the proteome. Our integrated proteotranscriptomic analysis (Fig. ) enhanced the understanding of the expression patterns associated with CIN in PDOs. Alterations in gene content cause significant energy and proteotoxic stress, impairing cell fitness . Accordingly, functional enrichment identified mitochondrial metabolism, comprising FAO, the TCA cycle and OXPHOS, in CIN models. Moreover, mitochondria generate metabolites involved in survival, growth, and gene expression regulation . Indeed, we observed changes in the expression of TCA cycle enzymes involved in the production of oncometabolites that control chromatin epigenetic changes and protein PTMs (such as IDH3A, OGDHL, SUCLA2, SUCLG1 and SUCLG2). Alterations in SUCLA2 expression led to changes in succinyl-CoA levels and global protein succinylation, regulating different mitochondrial metabolic networks, such as TCA cycle flux, and contributing to different diseases, including cancer . Other processes enhanced in CIN, such as the BCAA, FAO and ketone body metabolic pathways, would contribute to increasing the cellular pool of acyl-CoA species . Recent evidence indicates that the nucleus and mitochondria maintain bidirectional regulation and that, through "retrograde signaling", mitochondria can regulate the expression of different genes to control cell fate and function. Other relevant features associated with CIN were related to 'extracellular matrix and cytoskeleton organization', with a clear upregulation of genes involved in EMT and a significant reduction of the epithelial marker EpCAM, one of the most consistently downregulated proteins associated with the EMT phenotype in different models . Indeed, we identified a prominent CIN-associated cluster consisting of four laminins, implicated in EMT and an aggressive cancer phenotype .
Additionally, we observed notably higher expression of the EMT/stem marker CD44 in CIN PDOs. Interestingly, another EMT inducer, Twist1, was shown to downregulate cell cycle checkpoints involved in genome stability , and CIN-high PDOs were successfully classified using the transcriptomic EMT signature (Fig. ), as previously reported for CIN cancer cell lines . Motif analysis showed marked activity of transcription factors reported to induce EMT, as well as of some involved in mitochondrial function. Furthermore, cells undergoing EMT show metabolic changes to balance proliferation versus energy-consuming migration, indicating a crosstalk between EMT and metabolic reprogramming . CIN, as a dynamic process with increasing genomic imbalance, is more accurately characterized as a "rate" than as a cellular "state". On the other hand, although CIN shows significant effects on mRNA abundance, relatively few changes extend to the protein level . Taking these two issues into account, correlation analysis between the degree of CIN (measured as wGII) and the level of protein expression could help to identify candidate driver genes and processes in CIN. A new functional network based on the list of proteins with a significant Pearson correlation was constructed (Fig. ), and GO terms reinforced the enrichment of 'mitochondrial metabolism', with a key role for FAO, as well as of 'extracellular matrix', with laminins, and 'cytoskeleton organization', together with a new module related to the 'endoplasmic reticulum (ER) stress response', one of the most important mechanisms regulating cellular adaptation to adverse conditions, including aneuploidies . In addition, new proteins appeared, notably ACAT1, a key enzyme that allows ketone body re-utilization into acetyl-CoA. Previous studies established that ketone bodies fuel mitochondrial activity, leading to an increase in ATP production in cancer cells and actively promoting tumour growth and metastasis. However, ketone bodies can also act as signalling metabolites, influencing gene expression and PTMs, among other processes. Recently, it was shown that elevated serum β-hydroxybutyrate, a circulating ketone metabolite, accelerates CRC proliferation and metastasis via ACAT1 due to the induction of IDH1 acetylation . Other emerging proteins were GLS1 and GLUD1, involved in glutaminolysis, the process by which glutamine is converted into TCA cycle metabolites. GLS1 is overexpressed in various cancers and associated with poor prognosis, and both enzymes have been described as EMT inducers in cancer cells . The correlation analysis reinforced the EMT phenotype in CIN PDOs. Indeed, the most positively correlated proteins include the laminins, which activate the GTPase CDC42 , itself positively correlated with CIN. CDC42 is involved in remodelling of the actin cytoskeleton, which is known to activate YAP, connecting nuclear import processes with mechanical extracellular cues and the actin cytoskeleton . A YAP-activated signature was shown to predict poor outcomes in patients with CRC , in line with findings associating high YAP expression and nuclear localization with adverse patient outcomes . Intriguingly, the importin IPO7, whose principal cargo is YAP, was positively correlated with CIN. Nuclear YAP can induce the expression of laminins and other EMT-related genes, establishing a positive feedback loop . Although little is known about the cellular function of IPO7, this gene is frequently overexpressed in CRC, induced by c-MYC and downregulated by p53 .
We showed an increase in active YAP and IPO7 expression and their significant colocalization in CIN+ organoids, reinforcing a putative IPO7/YAP axis leading to YAP activation. If confirmed, this could represent a new putative target for drug development in the context of CIN. To prioritise among the proteome-wGII list, we used a public data-driven approach. Since MS-based proteomics has not been implemented in clinical practice, there are few studies containing both proteomic and clinical data. Using the data available from Zhang B. et al. , we found that 30 proteins were able to classify CIN tumours in this cohort, including IPO7, SUCLG2, ACAA2, and HADHA (Fig. a). Furthermore, the interrogation of functional genomic databases revealed some proteins as genetic dependencies in CIN cells (Fig. c-d): two chaperones related to the 'ER stress response', HSP90B and P4HB, and, again, two mitochondrial acyl-CoA transferases, ACAA2 and SUCLG2, involved in FAO and the TCA cycle. Moreover, drug sensitivity data from DepMap pointed to the inhibition of YAP and of acyltransferase processes as the most effective targets in CIN+ cell lines (Fig. e-f). Finally, although CIN status was not available and expression was measured at the mRNA level, higher expression of IPO7, YAP1, CDC42, LAMB1, LAMC1, KRT16, GLS and ACAT1 was associated with lower survival in MSS but not in MSI patients (Fig. S13-14). This work has some limitations that call for further studies, starting with in vitro or in vivo mechanistic experiments to confirm the role of the multiple candidates provided by our study as putative Achilles' heels of CIN tumours. In addition, due to the intrinsic epithelial nature of PDOs, we were unable to detect signs of immune involvement, such as the cGAS-STING pathway . Moreover, our proteomics approach, which only captures peptides with canonical sequences, cannot detect the neoantigens that CIN could generate. In summary, unlike cancer cells, normal cells cannot tolerate errors in chromosome segregation. Understanding how cancer cells cope with the deleterious consequences of CIN could open new therapeutic opportunities. We showed the utility of organoids for studying CIN in CRC, as they recapitulated the genomic and phenotypic features of CIN. The validation of all omics data in independent tissue cohorts in the context of CIN strengthens the generalisability of our findings. The expression patterns identified herein should serve as useful ex vivo markers of cancer progression and could be exploited to develop new therapeutic strategies selectively targeting high-CIN cells. Furthermore, the primary and processed datasets generated herein could be used for new biological discoveries and the generation of therapeutic hypotheses.
In conclusion, we demonstrated that PDOs from advanced CRC patients are a good in vitro model of chromosomal instability in terms of genomics, transcriptomics and proteomics. Our findings identify new putative targets that could be exploited in the future to develop new therapeutic strategies selectively targeting high-CIN cells.
Supplementary Material 1: Supplementary methods. Table S1. Reagents and tools. Table S2. Patient characteristics. Fig. S1. Whole genome heatmap representation of copy number gains (red) and losses (blue) across all tissues and PDOs. Fig. S2. Venn diagrams of gene-level copy number alterations depicting subclonal inter-metastasis heterogeneity. Fig. S3. (a) Hematoxylin and eosin staining of representative PDOs showing some examples of atypical mitotic figures in CIN- (A-B) and CIN+ (C-D) PDOs. (b) Heatmap representing the mutational and copy number status of PDOs. Fig. S4. GSEA analysis of CIN+ vs CIN- PDOs. qvalue threshold is set at <0.05. Fig. S5. Chi-square test of the CIN+/- cohort vs clustering based on the top 100 gene signature. Fig. S6. Unsupervised hierarchical clustering heatmaps of PDO Z-score gene expression across different CIN signatures: CIN70 and HET70. Fig. S7. Correlation analysis between RNA and protein fold change for those detected as differential by both RNAseq and proteomics (red dots) and those differential at RNA level detected by proteomics. Fig. S8. (a) Activity Score by mean of Z-value obtained by ISMARA, sorted by Z-value higher than 1.3. (b) ISMARA transcription-factor-activity plot of the CIN- and CIN+ models. n = 3 biological replicates. Fig. S9. Violin plot indicating YAP1 expression level in CIN+ vs CIN- organoids. n = 3 biological replicates. Fig. S10. Clustered protein association network derived from Pearson correlation analysis. Clustering was performed using the Markov clustering implementation in the Cytoscape-StringApp. Fig. S11. (a) Confocal imaging of CIN- (upper panel) and CIN+ (lower panel) organoids stained with DAPI (blue) and anti-Acetylated-Lysine (red). Representative images are shown. (b) Single-nucleus staining intensity quantification for Acetylated-Lysine. Fig. S12. Protein abundance of ACAT1 across MSI and CIN tissues. Fig. S13. Targets' expression is associated with an impact on OS. Representation of OS Kaplan-Meier curves in MSS and MSI-CRC samples from TCGA based on an optimal cut-off. Fig. S14. Targets' expression is associated with an impact on RFS. Representation of RFS Kaplan-Meier curves in MSS and MSI-CRC samples from TCGA based on an optimal cut-off. Fig. S15. Box plots of YAP1 and IPO7 gene expression in the TCGA (tumour versus normal mucosa near the tumour area). Supplementary Material 2. Additional File S1. Excel file containing organoid and tissue copy number data. Supplementary Material 3. Additional File S2. Excel file containing the organoid gene signature derived from the transcriptomic analysis used to classify TCGA tissues, and the differential gene expression analysis. Supplementary Material 4. Additional File S3. Excel file containing the proteomic dataset and full results of the integrative and Pearson analyses. Supplementary Material 5. Additional File S4. Excel file containing the public database analyses.
Japanese Society of Neuropsychopharmacology: “Guideline for Pharmacological Therapy of Schizophrenia” | 46b19eb0-6e73-47d5-ad44-6dc46e747ce0 | 8411321 | Pharmacology[mh] | Ryota Hashimoto, Department of Pathology of Mental Diseases, National Institute of Mental Health, National Center of Neurology and Psychiatry Junichi Iga, Department of Neuropsychiatry, Ehime University Graduate School of Medicine Ken Inada, Department of Psychiatry, Tokyo Women’s Medical University Taro Kishi, Department of Psychiatry, Fujita Health University School of Medicine Hiroshi Kimura, Department of Psychiatry, International University of Health and Welfare/Gakuji‐kai Kimura Hospital Yuki Matsuda, Department of Psychiatry, Jikei University School of Medicine Nobumi Miyake, Department of Neuropsychiatry, St. Marianna University School of Medicine Kiyotaka Nemoto, Department of Psychiatry, Faculty of Medicine, University of Tsukuba Shusuke Numata, Department of Psychiatry, Graduate School of Biomedical Science, Tokushima University Shinichiro Ochi, Department of Neuropsychiatry, Molecules and Function, Ehime University Graduate School of Medicine Hideki Sato, National Center Hospital, National Center of Neurology and Psychiatry Seiichiro Tarutani, Department of Psychiatry, Shin‐Abuyama Hospital, Osaka Institute of Clinical Psychiatry Hiroyuki Uchida, Department of Neuropsychiatry, Keio University School of Medicine
Ryota Hashimoto: Received research grants from Otsuka Pharmaceutical Co., Ltd., Japan Tobacco Inc., and Takeda Pharmaceutical Company Ltd.; rewards for lectures from Takeda Pharmaceutical Company Ltd., Lundbeck Japan K.K., Dainippon Sumitomo Pharma Co., Ltd., and Mochida Pharmaceutical.Co., Ltd.; manuscript fees for writing from Dainippon Sumitomo Pharma Co., Ltd. Junchi Iga: Received rewards for lectures from Otsuka Pharmaceutical Co. Ltd., Meiji Seika Pharma Co. Ltd., Sumitomo Dainippon Pharma Co. Ltd., Kyowa Pharmaceutical Industry Co. Ltd., Shionogi & Co. Ltd., Mochida Pharmaceutical Co. Ltd., Eisai Co. Ltd., Mylan Inc., Sawai Pharmaceutical Co. Ltd., Novartis Pharma K.K., Eli Lilly Japan K.K., MSD K.K., Ono Pharmaceutical Co. Ltd., Takeda Pharmaceutical Co. Ltd., Janssen Pharmaceutical K.K., Sanofi K.K., Viatris Inc., and Yoshitomiyakuhin Co. Ken Inada: Received research grants, rewards for lectures, manuscript fees for writing, and donations from Astellas Pharma Inc., Eisai Co. Ltd., MSD K.K., Otsuka Pharmaceutical Co. Ltd., Shionogi & Co. Ltd., Sumitomo Dainippon Pharma Co. Ltd., Chugai Pharmaceutical Co. Ltd., Eli Lilly Japan K.K., Novartis Pharma K.K., Pfizer Inc., Meiji Seika Pharma Co. Ltd., Mochida Pharmaceutical Co. Ltd., Janssen Pharmaceutical K.K., and Yoshitomiyakuhin Co. Taro Kishi: Received speaker’s honoraria from Sumitomo Dainippon, Otsuka, Eisai, Daiichi Sankyo, Janssen, Takeda, Kyowa, Kissei, Meiji, Pfizer, Mochida, Eli Lilly, MSD, Janssen, and Tanabe‐Mitsubishi (Yoshitomi); as well as research grants from Eisai, the Japanese Ministry of Health, Labour and Welfare, Grant‐in‐Aid for Scientific Research, and Fujita Health University School of Medicine. Hiroshi Kimura: Received rewards for lectures from Otsuka Pharmaceutical Co. Ltd., Meiji Seika Pharma Co. Ltd. and Janssen Pharmaceutical K.K., Yuki Matsuda: Received rewards for lectures from Meiji Seika Pharma Co. Ltd., Otsuka Pharmaceutical Co. Ltd., Kyowa Pharmaceutical Industry Co. Ltd., and Sumitomo Dainippon Pharma Co. Ltd. Nobumi Miyake: Received rewards for lectures from Meiji Seika Pharma Co. Ltd., and Sumitomo Dainippon Pharma Co. Ltd. Kiyotaka Nemoto: Received rewards for lectures from Eisai Co. Ltd., MSD K.K., Otsuka Pharmaceutical Co. Ltd., Shionogi & Co. Ltd., Sumitomo Dainippon Pharma Co. Ltd., Eli Lilly Japan K.K., Pfizer Inc., Meiji Seika Pharma Co. Ltd., Mochida Pharmaceutical Co. Ltd., Janssen Pharmaceutical K.K., and Lundbeck Japan K.K. Shusuke Numata: Received research grants, rewards for lectures, and donations from Astellas Pharma Inc., Eisai Co. Ltd., Otsuka Pharmaceutical Co. Ltd., Sumitomo Dainippon Pharma Co. Ltd., Eli Lilly Japan K.K., Novartis Pharma K.K., Pfizer Inc., Meiji Seika Pharma Co. Ltd., Mochida Pharmaceutical Co. Ltd., Janssen Pharmaceutical K.K., Kyowa Pharmaceutical Industry Co., Ltd., Takeda Pharmaceutical Company Limited., and Yoshitomiyakuhin Co. Shinichiro Ochi: Received rewards for lectures from Dainippon Sumitomo Pharma Co. Ltd., Meiji Seika Pharma Co. Ltd., Mochida Pharmaceutical Co. Ltd., and Kowa Company, Ltd. Hideki Sato: Received research grants from Nippon Boehringer Ingelheim Co. Ltd., Lundbeck Japan K.K., Biogen Japan Ltd., and Mitsubishi Tanabe Pharma Corporation. Seiichiro Tarutani: Received rewards for lectures from Otsuka Pharmaceutical Co. Ltd., Sumitomo Dainippon Pharma Co. Ltd., Meiji Seika Pharma Co. Ltd., Janssen Pharmaceutical K.K., and Yoshitomiyakuhin Co. 
Hiroyuki Uchida: Received grants from Eisai, Otsuka Pharmaceutical, Dainippon‐Sumitomo Pharma, Daiichi Sankyo Company, Mochida Pharmaceutical, and Meiji‐Seika Pharma; speaker’s honoraria from Otsuka Pharmaceutical, Dainippon‐Sumitomo Pharma, Eisai, and Meiji‐Seika Pharma; and advisory panel payments from Dainippon‐Sumitomo Pharma within the past three years.
(For experts, patients, families, and supporters) This guideline was written for specialists in the treatment of schizophrenia, but patients, as well as their families and supporters, may also use this guideline. Therefore, a very simple explanation will first be given on the aims of this guideline. This guideline shows the drug‐type selection criteria for patients with a clear diagnosis of schizophrenia when starting drug treatment. There are a few elements to keep in mind when reading this guideline. The first is that this guideline is for patients with a clear diagnosis of schizophrenia. There are cases in clinical settings where a patient may have similar symptoms but is not affected by schizophrenia, particularly in the early stages of the disease, when it may not be possible to clearly diagnose schizophrenia. This guideline cannot be applied in such cases. There may also be cases where the criteria in this guideline are not applicable due to characteristics of comorbidities, even if schizophrenia was diagnosed. The second point to consider is that this guideline does not indicate that schizophrenia can be treated with pharmacological therapy alone. Schizophrenia is treated through a combination of pharmacological therapy and psychosocial therapy. Either pharmacological therapy or psychosocial therapy may be more effective depending on the types of symptoms and the time of illness. The skillful combination of both therapies can improve cerebral and psychosocial dysfunction in schizophrenia and increase the effectiveness of treatment. For this reason, the combination of pharmacological and psychosocial therapy is a major premise of schizophrenia treatment. Sufficient results cannot be expected from treatment if only one of these therapies is conducted. Furthermore, the sense of security obtained from reliable human relationships and stable living conditions is the basis of this specialized treatment. This aspect will not be repeated again in each guideline to avoid confusion. As a result, one may have the impression from reading the individual texts that it is recommended to treat schizophrenia with pharmacological therapy alone, or that pharmacological therapy has a larger effect than other therapies. This is not the aim of the guideline, and we hope that there is no confusion on this point. The third point is that this guideline discusses general theories. The pathology of schizophrenia varies in each patient. Living conditions also vary for each patient. Furthermore, drug effects and side effects are also very individual. A guideline is the result of averaging this variability. For this reason, there may be situations where the recommendations in this guideline do not apply for the specific individual case. It cannot be determined that a patient receives inappropriate treatment solely based on the fact that the guideline recommendations are not being followed. Specialized decisions on an individual level for each treatment situation are prioritized over following this guideline. The word “guideline” may give the impression that this is a set of rules, but understanding this text in this way is not correct. This guideline is significant and useful in the sense that it is a summary of the experiences of many experts studying and treating many patients, but it is not a set of rules that should be obeyed unconditionally. 
This guideline has value as a single document to be used as a basis or reference for experts to conduct actual treatment, as well as an opportunity for patients and their families to discuss treatment options together with the expert. Instead of patients and their families unilaterally accepting the guideline or the expert's decisions, the basis of treatment for mental illness, including schizophrenia, is for both parties to discuss their hopes and thoughts and agree on a treatment policy. Only by using this guideline for this purpose can its true intention be realized. The objective of schizophrenia treatment is to effectively face the symptoms and illness, strive to achieve a lifestyle that the individual wants, and find a way of life that is unique to that individual through this type of collaboration.
Background of creating the Guideline for Pharmacological Therapy of Schizophrenia
Guidelines for schizophrenia treatment have been created in various countries and have been translated and used in Japan as well. However, the available drugs and their administration and the medical system can vary between Japan and other countries. Therefore, a clinical guideline that is aligned with the medical circumstances in Japan was needed. There have been previous clinical guidelines in Japan that relied on expert opinion, but none were based on scientific evidence. For this reason, a clinical guideline that gathers the findings obtained to date and is based on scientific evidence was needed. With this in mind, the Japanese Society of Neuropsychopharmacology formed a Task Force of Guideline for Pharmacological Therapy of Schizophrenia and created the guideline. It should be stated once again that schizophrenia treatment is not based on pharmacological therapy alone. Comprehensive treatment such as psychosocial therapy and collaboration with medical welfare is necessary. It is self-evident that creating a comprehensive treatment guideline is desirable, but we decided to create a guideline for pharmacological therapy, for which there is relatively ample evidence, as a first step toward the creation of a comprehensive guideline. Members of the Japanese Society for Schizophrenia Research also participated in the creation of this guideline and were mainly in charge of writing the "Before reading this guideline" section in the introduction.
Members of the task force at the time of the initial creation of the Guideline for Pharmacological Therapy of Schizophrenia in 2015
Chair: Person responsible for this guideline.
Chair/vice-chair: Persons who determine the policy of this guideline. Specifically, the Minds method was used. The overall composition was decided as follows: Introduction, Chapter 1: First-episode psychosis, Chapter 2: Recurrence and relapse, Chapter 3: Maintenance treatment, Chapter 4: Treatment resistance, Chapter 5: Other clinical problems. The person in charge of each chapter was determined and members for each chapter were approved.
Person in charge of each chapter: Person responsible for each chapter. A committee was chosen for each chapter. Chapter 1: Taro Kishi, Chapter 2: Masaki Kato, Chapter 3: Taishiro Kishimoto, Chapter 4: Ryota Hashimoto, Chapter 5: Ken Inada.
Members for each chapter: The clinical questions (CQs) for each chapter were determined by the person responsible for each chapter. Each chapter consists of an introduction, a recommendation, and an explanation.
All members (chair, vice-chair, person in charge of each chapter, members for each chapter): Sufficient discussions were held for each specified CQ and each CQ was approved unanimously as a general rule. CQs with split opinions were put on hold and the person in charge of each chapter created a new proposal that incorporated these opinions, after which a second discussion was held. Each member submitted one vote when no unanimous consensus could be reached and the CQ was approved if over 2/3 of the members voted for it.
Cooperation from the Japanese Society of Schizophrenia Research
The Japanese Society of Neuropsychopharmacology has made every effort to avoid actual or potential conflicts of interest, so that the members may prepare this clinical guideline created by the Society with neutrality and fairness. All members creating this guideline have disclosed actual or potential conflict‐of‐interest information. The creation of this guideline was funded by a Ministry of Health, Labor and Welfare Science Research Grant. Conflict‐of‐interest information by the members creating this guideline (as of September 2015) is as follows: Junichi Iga: Received rewards for lectures and manuscript fees for writing from Astellas Pharma Inc., Eisai Co. Ltd., MSD K.K., Otsuka Pharmaceutical Co. Ltd., GlaxoSmithKline plc, Shionogi & Co. Ltd., Eli Lilly Japan K.K., Novartis Pharma K.K., Meiji Seika Pharma Co. Ltd., Mochida Pharmaceutical Co. Ltd., and Janssen Pharmaceutical K.K. Jun Ishigooka: Received research grants, rewards for lectures, manuscript fees for writing, and donations from Arc Medium, Astellas Pharma Inc., AbbVie Inc., Medicine and Drug Journal, Infront Inc., Eisai Co. Ltd., MSD K.K., Otsuka Pharmaceutical Co. Ltd., GlaxoSmithKline plc, CareNet, Inc., Kowa Pharmaceutical Co. Ltd., GMJ Inc., Shionogi & Co. Ltd., National Federation of Association of Families with Mental Illness in Japan, Sumitomo Dainippon Pharma Co. Ltd., Takeda Pharmaceutical Co. Ltd., Mitsubishi Tanabe Pharma Co., Chugai Igakusha, Chugai Pharmaceutical Co. Ltd., Tokyo Institute of Psychiatry, Tochigi Prefecture Mental Illness Support Association, Toppan Printing Co. Ltd., Nanzando, Eli Lilly Japan K.K., Japan Medical Association, Japan Medical Journal, Novartis Pharma K.K., Pfizer Inc., Meiji Seika Pharma Co. Ltd., Medical Professional Relations K.K., Mebix Inc., Mochida Pharmaceutical Co. Ltd., Janssen Pharmaceutical K.K., Yoshitomiyakuhin Co., and Life Medicom Co. Ltd. Koki Ito: Received rewards for lectures from Otsuka Pharmaceutical Co. Ltd., Sumitomo Dainippon Pharma Co. Ltd., and Janssen Pharmaceutical K.K. Ken Inada: Received rewards for lectures and manuscript fees for writing from Arc Medium, Astellas Pharma Inc., AbbVie Inc., Igaku‐Shoin Ltd., Medicine and Drug Journal, Eisai Co. Ltd., MSD K.K., M3 Inc., Otsuka Pharmaceutical Co. Ltd., GlaxoSmithKline plc, Shionogi & Co. Ltd., Seiwa Shoten Co. Ltd., Sumitomo Dainippon Pharma Co. Ltd., Chugai Pharmaceutical Co. Ltd., Eli Lilly Japan K.K., Novartis Pharma K.K., Pfizer Inc., Meiji Seika Pharma Co. Ltd., Medical Review Co. Ltd., Mebix Inc., Mochida Pharmaceutical Co. Ltd., Janssen Pharmaceutical K.K., and Yoshitomiyakuhin Co. Nakao Iwata: Received research grants, rewards for lectures, manuscript fees for writing, and donations from Arc Medium, Astellas Pharma Inc., AbbVie Inc., Igaku‐Shoin Ltd., Eisai Co. Ltd., MSD K.K., Otsuka Pharmaceutical Co. Ltd., GlaxoSmithKline plc, Shionogi & Co. Ltd., JUMPs, Sentan Igaku‐sha, Daiichi Sankyo Co. Ltd., Sumitomo Dainippon Pharma Co. Ltd., Takeda Pharmaceutical Co. Ltd., Mitsubishi Tanabe Pharma Co., Chugai Pharmaceutical Co. Ltd., Tsumura & Co., Nikkei NP, Eli Lilly Japan K.K., Nihon Medi‐Physics Co. Ltd., Novartis Pharma K.K., Pfizer Inc., For Life Medical Inc., Bracket Co., Meiji Seika Pharma Co. Ltd., Medical Review Co. Ltd., Mebix Inc., Janssen Pharmaceutical K.K., and Yoshitomiyakuhin Co. Tetsuro Enomoto: Received rewards for lectures from Otsuka Pharmaceutical Co. Ltd., Novartis Pharma K.K., Mochida Pharmaceutical Co. Ltd., and Janssen Pharmaceutical K.K. 
Masaki Kato: Received research grants and rewards for lectures from Otsuka Pharmaceutical Co. Ltd., GlaxoSmithKline plc, Shionogi & Co. Ltd., Sumitomo Dainippon Pharma Co. Ltd., Mitsubishi Tanabe Pharma Co., Eli Lilly Japan K.K., Promotion and Mutual Aid Corporation for Private Schools of Japan, Pfizer Inc., Meiji Seika Pharma Co. Ltd., Ministry of Education, Culture, Sports, Science and Technology Scientific Research Fund, Janssen Pharmaceutical K.K., and Yoshitomiyakuhin Co. Tetsufumi Kanazawa: Received rewards for lectures from Astellas Pharma Inc., Otsuka Pharmaceutical Co. Ltd., GlaxoSmithKline plc, Sumitomo Dainippon Pharma Co. Ltd., Mitsubishi Tanabe Pharma Co., Eli Lilly Japan K.K., Meiji Pharmaceutical Co. Ltd., Janssen Pharmaceutical K.K., and Yoshitomiyakuhin Co. Taro Kishi: Received research grants, rewards for lectures, and manuscript fees for writing from Astellas Pharma Inc., AbbVie Inc., Eisai Co. Ltd., Otsuka Pharmaceutical Co. Ltd., GlaxoSmithKline plc, Shionogi & Co. Ltd., Daiichi Sankyo Co. Ltd., Sumitomo Dainippon Pharma Co. Ltd., Mitsubishi Tanabe Pharma Co., Tsumura & Co., Japanese Society of Schizophrenia Research (Astellas Pharma Inc.), Eli Lilly Japan K.K., the Japanese Society for the Promotion of Science Grants‐in‐Aid for Scientific Research for Young Researchers B, Novartis Pharma K.K., Pfizer Inc., Meiji Seika Pharma Co. Ltd., and Janssen Pharmaceutical K.K. Taishiro Kishimoto: Received rewards for lectures and donations from AbbVie Inc., Eisai Co. Ltd., MSD K.K., Otsuka Pharmaceutical Co. Ltd., GlaxoSmithKline plc, Shionogi & Co. Ltd., SENSHIN Medical Research Foundation (Mitsubishi Tanabe Pharma Co.), Sumitomo Dainippon Pharma Co. Ltd., Takeda Pharmaceutical Co. Ltd., Japanese Society of Schizophrenia Research (Astellas Pharma Inc.), Eli Lilly Japan K.K., Novartis Pharma K.K., Pfizer Inc., Pfizer Health Research Foundation, Mochida Pharmaceutical Co. Ltd., Janssen Pharmaceutical K.K., and Yoshitomiyakuhin Co. Ichiro Kusumi: Received research grants, rewards for lectures, manuscript fees for writing, and donations from Asahi Kasei Pharma Co., Astellas Pharma Inc., AbbVie Inc., Igaku‐Shoin Ltd., Eisai Co. Ltd., MSD K.K., Otsuka Pharmaceutical Co. Ltd., Ono Pharmaceutical Co. Ltd., Kyowa Kirin Co. Ltd., GlaxoSmithKline plc, Shionogi & Co. Ltd., Synergy Medical Communications, Seiwa Shoten Co. Ltd., Daiichi Sankyo Co. Ltd., Sumitomo Dainippon Pharma Co. Ltd., Takeda Pharmaceutical Co. Ltd., Mitsubishi Tanabe Pharma Co., Chugai Pharmaceutical Co. Ltd., Eli Lilly Japan K.K., Nippon Chemiphar Co. Ltd., Boehringer Ingelheim Japan, Novartis Pharma K.K., Pfizer Inc., Meiji Seika Pharma Co. Ltd., Mebix Inc., Janssen Pharmaceutical K.K., and Yoshitomiyakuhin Co. Soichiro Sato: Received rewards for lectures and manuscript fees for writing from Astellas Pharma Inc., Medicine and Drug Journal, MSD K.K., Otsuka Pharmaceutical Co. Ltd., GlaxoSmithKline plc, Shionogi & Co. Ltd., Sumitomo Dainippon Pharma Co. Ltd., Tominaga Pharmacy Inc., Nanzando, Eli Lilly Japan K.K., Medical Review Co. Ltd., Mochida Pharmaceutical Co. Ltd., Janssen Pharmaceutical K.K., and Yoshitomiyakuhin Co. Taro Suwa: Received rewards for lectures from Otsuka Pharmaceutical Co. Ltd., GlaxoSmithKline plc, and Novartis Pharma K.K. Hiroyoshi Takeuchi: Received manuscript fees for writing from Sumitomo Dainippon Pharma Co. Ltd. Yoshiteru Takekita: Received rewards for lectures from Eisai Co. Ltd., Otsuka Pharmaceutical Co. Ltd., Daiichi Sankyo Co. Ltd., Sumitomo Dainippon Pharma Co. 
Ltd., Eli Lilly Japan K.K., Novartis Pharma K.K., Meiji Seika Pharma Co. Ltd., and Janssen Pharmaceutical K.K. Aran Tajika: Received rewards for lectures from Mitsubishi Tanabe Pharma Co., and Eli Lilly Japan K.K. Naohisa Tsujino: Received rewards for lectures and manuscript fees for writing from Astellas Pharma Inc., Igaku‐Shoin Ltd., Omori Medical Association, Kanehara & Co. Ltd., Shionogi & Co. Ltd., Sumitomo Dainippon Pharma Co. Ltd., Nanzando, Novartis Pharma K.K., Fujifilm RI Pharma Co. Ltd., Meiji Seika Pharma Co. Ltd., Medical View Co. Ltd., Mochida Pharmaceutical Co. Ltd., and Janssen Pharmaceutical K.K. Atsuo Nakagawa: Received research grants, rewards for lectures, and manuscript fees for writing from Arc Medium, Asahi Kasei Pharma Co., Igaku‐Shoin Ltd., NTT Docomo Inc., Otsuka Pharmaceutical Co. Ltd., Kagakuhyoronsya Co. Ltd., National Center of Neurology and Psychiatry, Kongo‐shuppan Co., Shionogi & Co. Ltd., Jiho Inc., Shimane Community Medical Care and Career Support Center, Shinjuku Health Center, Seiwa Shoten Co. Ltd., Takeda Pharmaceutical Co. Ltd., Mitsubishi Tanabe Pharma Co., Eli Lilly Japan K.K., Japanese Association for Acute Medicine, Japan Health and Culture Promotion Center, Japanese Society for Child and Adolescent Psychiatry, Meiji Seika Pharma Co. Ltd., Mochida Pharmaceutical Co. Ltd., Yamanashi Prefectural Kita Hospital, Janssen Pharmaceutical K.K., and Yoshitomiyakuhin Co. Ryota Hashimoto: Received rewards for lectures or donations from Otsuka Pharmaceutical Co. Ltd., GlaxoSmithKline plc, Daiko Advertising Inc., Sumitomo Dainippon Pharma Co. Ltd., Nippon Zoki Pharmaceutical Co., Novartis Pharma K.K., Hisamitsu Pharmaceutical Co. Ltd., Pfizer Inc., Janssen Pharmaceutical K.K., and Yoshitomiyakuhin Co. Akitoyo Hishimoto: Received research grants, rewards for lectures, and manuscript fees for writing from Asubio Pharma Co. Ltd., Eisai Co. Ltd., MSD K.K., Otsuka Pharmaceutical Co. Ltd., GlaxoSmithKline plc, Shionogi & Co. Ltd., Sumitomo Dainippon Pharma Co. Ltd., Takeda Pharmaceutical Co. Ltd., Mitsubishi Tanabe Pharma Co., Eli Lilly Japan K.K., Nippon Shinyaku Co. Ltd., Pfizer Inc., and Janssen Pharmaceutical K.K. Hikaru Hori: Received rewards for lectures and manuscript fees for writing from Arc Medium, Asahi Kasei Pharma Co., Astellas Pharma Inc., Eisai Co. Ltd., MSD K.K., Otsuka Pharmaceutical Co. Ltd., GlaxoSmithKline plc, Seiwa Shoten Co. Ltd., Sumitomo Dainippon Pharma Co. Ltd., Takeda Pharmaceutical Co. Ltd., Mitsubishi Tanabe Pharma Co., Nakajima Educational Films Publishing Inc., Eli Lilly Japan K.K., Novartis Pharma K.K., Pfizer Inc., Meiji Seika Pharma Co. Ltd., Medical Review Co. Ltd., Mochida Pharmaceutical Co. Ltd., and Janssen Pharmaceutical K.K. Yuki Matsuda: Received research grants, rewards for lectures, and manuscript fees for writing from Otsuka Pharmaceutical Co. Ltd., GlaxoSmithKline plc, Seiwa Shoten Co. Ltd., Sumitomo Dainippon Pharma Co. Ltd., Eli Lilly Japan K.K., the Japanese Society for the Promotion of Science Grants‐in‐Aid for Scientific Research for Young Researchers B, Pfizer Inc., Meiji Seika Pharma Co. Ltd., and Life Science Co. Ltd. Fuminari Misawa: Received rewards for lectures from Otsuka Pharmaceutical Co. Ltd., Sumitomo Dainippon Pharma Co. Ltd., Eli Lilly Japan K.K., Novartis Pharma K.K., and Pfizer Inc. Nobumi Miyake: Received research grants, rewards for lectures, and gifts from Otsuka Pharmaceutical Co. Ltd., Shionogi & Co. Ltd., Sumitomo Dainippon Pharma Co. 
Ltd., Japanese Society of Schizophrenia Research (Astellas Pharma Inc.), Eli Lilly Japan K.K., the Japanese Society for the Promotion of Science Grants‐in‐Aid for Scientific Research for Young Researchers B, Novartis Pharma K.K., Meiji Seika Pharma Co. Ltd., and Janssen Pharmaceutical K.K. Ryoji Miyada: Received rewards for lectures and manuscript fees for writing from Astellas Pharma Inc., Otsuka Pharmaceutical Co. Ltd., Sumitomo Dainippon Pharma Co. Ltd., and Eli Lilly Japan K.K. Seiya Miyamoto: Received rewards for lectures and manuscript fees for writing from Otsuka Pharmaceutical Co. Ltd., Sumitomo Dainippon Pharma Co. Ltd., Mitsubishi Tanabe Pharma Co., Chugai Pharmaceutical Co. Ltd., Eli Lilly Japan K.K., and Janssen Pharmaceutical K.K. Hiroki Yamada: Received rewards for lectures and manuscript fees for writing from MSD K.K., Otsuka Pharmaceutical Co. Ltd., GlaxoSmithKline plc, Sumitomo Dainippon Pharma Co. Ltd., Eli Lilly Japan K.K., Meiji Seika Pharma Co. Ltd., Janssen Pharmaceutical K.K., and Yoshitomiyakuhin Co. Hiroyuki Watanabe: Received rewards for lectures and manuscript fees for writing from Astellas Pharma Inc., Eisai Co. Ltd., Otsuka Pharmaceutical Co. Ltd., Sumitomo Dainippon Pharma Co. Ltd., Takeda Pharmaceutical Co. Ltd., Mitsubishi Tanabe Pharma Co., Eli Lilly Japan K.K., Mochida Pharmaceutical Co. Ltd., Janssen Pharmaceutical K.K., and Yoshitomiyakuhin Co.
November 22, 2014, Nagoya. 24th Japanese Society of Clinical Neuropsychopharmacology/44th Japanese Society of Neuropsychopharmacology Joint Annual Meeting: “Objectives and Significance of Treatment Guideline Creation—Japanese Society of Neuropsychopharmacology/Guideline for Pharmacological Therapy of Schizophrenia Task Force Interim Report.”
This guideline was compiled by the members of the Guideline for Pharmacological Therapy of Schizophrenia task force based on the scientific evidence available at the time of preparation. Future evidence may result in changes to the conclusions or recommendations stated in this guideline. Please also note that national health insurance coverage may change in the future. It may be permissible for physicians implementing treatment to deviate from this guideline for specific patients or conditions, and such deviations may be valid adjustments to treatment made at the discretion of the physician. As such, merely following this guideline does not exempt the treating physician from liability for negligence, nor can deviation from this guideline in itself be regarded as negligence. The content of this guideline should not serve as a basis for medical litigation, and the physician implementing treatment is responsible for the results of the actual medical practice.
The basic principles of this guideline are as follows.

Target
In view of the characteristics of schizophrenia, this guideline is based on scientific evidence and was prepared mainly for psychiatric specialists involved in the medical care of patients with schizophrenia. The content of this guideline was created with the objective of supporting the decision‐making of psychiatric specialists in clinical settings, and we hope that it will be used in daily medical care.

Basic policy of creation method
The basic process of the creation of this guideline is based on the “Minds Clinical Guideline Creation Guide 2014” of the Medical Information Network Distribution Service (Minds). The guideline was also evaluated using AGREE, an instrument for appraising the methodological quality of clinical practice guidelines, and an effort was made to meet social demands. The recommendations in this guideline are specific, and both the degree of recommendation and the strength of evidence are generally described so that important recommendations can be identified easily (Figure: Degree of recommendation; Table: Strength of evidence).

Revisions
This guideline will be updated as appropriate when new important information and appropriate comments are received.
The Task Force of the Guideline for Pharmacological Therapy of Schizophrenia determined the scope of this guideline when starting its preparation and determined the CQs based on this scope. Each working group of the task force conducted a systematic review for each CQ and evaluated the total body of evidence. Three reference databases (PubMed, Cochrane Library, and Ichushi‐Web) were used to conduct a comprehensive search. Literature searches were conducted until November 2014, but the search database was expanded as needed, and international guidelines that had already been published were also referenced. The search formula and the range of the literature search were recorded.

Each working group created the recommendation drafts for each CQ based on an evaluation of the total body of evidence (eg, summary of the total body of evidence, balance between risks and benefits, and cost/resource utilization). Peer review was conducted by an internal reviewer of the task force to ensure the suitability of the systematic reviews of the CQs and the drafting of recommendations. Evaluations, including AGREE II Evaluation Domain 3 (rigor of development), were conducted in this peer review. Each working group then revised the recommendation drafts.

The recommendation drafts for each CQ were examined by members of the task force in degree-of-recommendation decision meetings while taking into consideration consistency with other guidelines. Recommendation texts were then determined by overall consensus. Approval for CQ1‐1, 1‐2, 1‐3, 5‐1, 5‐2, 5‐3, 5‐6, 5‐7, and 5‐8 was put on hold after discussion at the final meeting on December 20. All other CQs were approved. Revised drafts which incorporated the discussions to date were subsequently submitted for the CQs whose approval had been put on hold. All CQs were unanimously approved in a mail meeting on July 9, 2015. It had been determined in advance that failure to reach unanimous consensus among all members of the task force would result in votes being cast, with approval given where over 2/3 of the members agreed. A member of the task force was designated as a representative and entrusted with consensus procedures in cases where another member was absent from a recommendation decision meeting for unavoidable reasons. Public comments from members of the Japanese Society of Neuropsychopharmacology and the general public were also collected as opinions, and revisions incorporating these opinions were made to ensure completeness.
Regarding diagnosis of schizophrenia
This guideline assumes that the diagnosis of schizophrenia is confirmed. In actual clinical settings, it is necessary to carefully exclude organic diseases as well as other psychiatric disorders (eg, mood disorders) in order to diagnose schizophrenia.

General theory of pharmacological therapy of schizophrenia
As is the case for the treatment of all diseases, treatment selection involves considering the balance between the efficacy (benefits) and side effects (harm) of treatment, and a treatment is chosen only if it is determined that the benefits outweigh the harm caused. This guideline is also based on this theory, with evidence collected on benefits and harm and recommendations based on this evidence. Effects need to be confirmed with monotherapy to scientifically evaluate effectiveness, because evaluating the balance between benefits and harms is difficult with concomitant therapy with multiple drugs. We confirm again that antipsychotic monotherapy is the general rule for this guideline.

The limits of evidence
Pharmacological therapy in schizophrenia treatment is a relatively well-established area in terms of evidence. However, there is still a lack of evidence and many unresolved CQs remain. For example, there are still few prospective studies that compare drugs to examine their efficacy and side effects. For this reason, a majority of the evidence relating to effectiveness that was examined in this guideline focused on monotherapy compared with placebo. There are also only a few clinical studies that limit their subjects to children, seniors, or Japanese patients. Little evidence from randomized controlled trials (RCTs) is available for the fields in Chapter 4, “Treatment resistance,” and Chapter 5, “Other clinical problems.” Insisting on high-level evidence for these CQs would create a guideline that is of little use in clinical settings. For this reason, the guideline creation team searched a wide range of evidence levels, including case reports, and sought to create a guideline that is useful in clinical practice. It should be understood that there are limits to the descriptions in this guideline for some target groups (eg, children, seniors, and treatment-resistant cases), and care should be taken to use this guideline appropriately.

Use of the latest edition and overall reading
The Task Force of the Guideline for Pharmacological Therapy of Schizophrenia in the Japanese Society of Neuropsychopharmacology plans to update this guideline as appropriate upon receiving new important information and appropriate comments. Please ensure that the latest edition of this guideline is used. Comprehensive treatment including psychosocial therapy in addition to pharmacological therapy is needed for schizophrenia treatment, and various measures are needed throughout the course of illness. This guideline describes pharmacological therapy by disease stage. However, please first read the entire guideline rather than reading and using the descriptions for a single disease stage.

Drug name notation
Drugs not approved in Japan are marked with an asterisk; drugs approved in Japan are written without one. Drug names are listed in alphabetical order.
Adherence: Patients actively participate in treatment policy decisions and receive treatment in accordance with these decisions. Although often confused with compliance with medication in pharmacological therapy, adherence is a more active concept.
First‐generation antipsychotics (FGAs) and second‐generation antipsychotics (SGAs): Antipsychotics are broadly classified into two classes (FGAs and SGAs) based on the time of their development. A large number of studies compare these two groups. This guideline also follows these studies and separately describes FGAs and SGAs for convenience. However, both classes group drugs with different mechanisms of action together, and both include drugs with various evidence levels regarding efficacy and side effects. Therefore, please refer to the explanation and original reference for specific evidence on individual drugs. The main FGAs and SGAs in Japan are as follows: FGAs: chlorpromazine (CP), fluphenazine, haloperidol, etc. SGAs: aripiprazole, blonanserin, clozapine, olanzapine, paliperidone, perospirone, quetiapine, risperidone, etc.
Cochrane Review: A systematic review created by the Cochrane Collaboration. These reviews have a reputation for high quality and are published in the Cochrane Library, which is updated quarterly.
Intermittent dosing: An administration method in which a drug is withdrawn until recurrence or suspected recurrence.
Extended dosing: A regular but extended dosing interval; for example, administering a drug recommended daily at a frequency of once every two days, or administering a long‐acting injection (LAI) intended to be given every two weeks at four‐week intervals instead.
Continuous dosing: An administration method in which the drug is given regularly at the recommended intervals.
Systematic review: A thorough search of the literature and high‐quality analysis of research data such as RCTs while rigorously excluding biases such as publication bias.
Double‐blind study: A trial method in which neither the subject (patient) nor the researcher (physician) is aware of the drug that is administered.
Efficacy: Effects of pharmacological therapy or other treatment interventions.
Effectiveness: A concept which combines benefits (effects) and harms (side effects).
Bias risk: The risk that biases (eg, in population sampling or trial design) are present in the results (eg, estimated values of treatment effects).
Unblinded, open‐label study: A trial in which the subject (patient) and the researcher (physician) are aware of the intervention method.
RCT: A trial method for evaluating intervention effects. Subjects are randomly allocated into intervention and control groups, and intervention effects are compared between groups.
Meta‐analysis: A statistical method which integrates multiple clinical trial results. A more accurate effect size or difference from a comparison group can be found by integrating multiple trials. This can also be used to analyze differences in trial results.
Number needed to treat (NNT): The number of patients that need to be treated for one patient to reach a given target.
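As an illustration of the NNT definition above (this worked example is not part of the original guideline, and the response rates used are hypothetical), the NNT is commonly calculated as the reciprocal of the absolute risk reduction (ARR), that is, the difference in the rate of the target outcome (eg, treatment response) between the treatment and control groups:

\[
\mathrm{NNT} = \frac{1}{\mathrm{ARR}} = \frac{1}{p_{\text{treatment}} - p_{\text{control}}}
\]

For example, if 50% of patients respond on an active drug and 30% respond on placebo, then ARR = 0.50 - 0.30 = 0.20 and NNT = 1/0.20 = 5; roughly five patients would need to be treated for one additional patient to respond. NNT values are conventionally rounded up to the next whole number.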
ACE blocker: angiotensin‐converting enzyme blocker
AIMS: Abnormal Involuntary Movement Scale
BAS: Barnes Akathisia Scale
BMI: body mass index
BPRS: Brief Psychiatric Rating Scale
BZ: benzodiazepine
CATIE: Clinical Antipsychotic Trials of Intervention Effectiveness
CGI‐I: Clinical Global Impression‐Improvement
CK‐MB: creatine kinase MB
CP: chlorpromazine
CPMS: Clozaril Patient Monitoring Service
CQ: clinical question
CRP: C‐reactive protein
DIEPSS: Drug‐Induced Extrapyramidal Symptoms Scale
ECT: electroconvulsive therapy
EPS: extrapyramidal symptom
Eq: equivalent
ESRS: Extrapyramidal Symptom Rating Scale
FDA: Food and Drug Administration
FGAs: first‐generation antipsychotics
GAF: Global Assessment of Functioning
HbA1c: hemoglobin A1c
HDL: high‐density lipoprotein
HDRS: Hamilton Rating Scale for Depression
LAI: long‐acting injection
LDL: low‐density lipoprotein
MD: mean difference
m‐ECT: modified electroconvulsive therapy
n: number of patients
N: number of studies
N/A: not applicable
Na: sodium (natrium)
NNT: number needed to treat
PANSS: Positive and Negative Syndrome Scale
PANSS‐EC: Positive and Negative Syndrome Scale Excited Component
Q&A: question and answer
QOL: quality of life
RCT: randomized controlled trial
RR: risk ratio
SDM: shared decision‐making
SGAs: second‐generation antipsychotics
SU: sulfonylurea
TD: tardive dyskinesia
XR: extended‐release
95% CI: 95% confidence interval
September 24, 2015: Publication.
July 31, 2016: Revision. The introduction of each chapter was revised for easier comprehension. Fixed typographical errors and consistency of reference numbers.
April 18, 2017: Revision. Two papers with deficient ethical and scientific suitability were found in the guideline references. Both papers were removed.
May 23, 2017: Revision. Consistency issues arose after the implementation of revisions where the papers were deleted on April 18, making a third revision necessary.
November 22, 2017: Revision. A number of expressions and contents which required revision were found when creating a simplified version of the Guideline for Pharmacological Therapy of Schizophrenia for patients, patients’ families, and medical staff during the first meeting on May 20, 2017, so the revisions were made.
Introduction

First‐episode psychosis refers to the state in which a patient presents with significant behavioral disorders, including hallucinations, delusions, agitation, stupor, confusion, and catatonic symptoms, for the first time. It is often difficult to differentiate between schizophrenia, schizoaffective disorders, delusional disorders, schizophreniform disorders (symptoms lasting 1‐6 months), and brief psychotic disorders (symptoms lasting <1 month) in clinical settings (Figure ). Previous clinical studies collectively refer to these illnesses and disorders as “first‐episode psychosis.” For this reason, we will discuss antipsychotic treatment for first‐episode psychosis in this chapter.

One of the most basic paradigms of treating patients with schizophrenia is to ensure the differentiation of schizophrenia from the various somatic disorders that present with similar symptoms. This differential diagnosis must be conducted with particular care when first‐episode psychosis is involved. Comprehensive treatment includes both pharmacological and non‐pharmacological therapies. The most basic pharmacological approach is to prescribe an antipsychotic as monotherapy at an appropriate dose and for an appropriate duration. First‐episode psychosis is highly sensitive to both the desired treatment effects and the side effects of antipsychotics. These drugs are known to be effective against first‐episode psychosis at lower doses than those necessary for chronic schizophrenia.

Thus, in this chapter, we will discuss the following: in CQ1‐1, evidence relevant to the selection of an antipsychotic as monotherapy for the treatment of patients with first‐episode psychosis; in CQ1‐2, appropriate antipsychotic doses for first‐episode psychosis; in CQ1‐3, the appropriate time at which to evaluate the efficacy of an antipsychotic used as a treatment for first‐episode psychosis; and in CQ1‐4, appropriate treatment durations. It is important in each of these CQs to consider the balance of efficacy and safety of each drug in the context of the specific case. Periodic evaluations of the efficacy and safety of antipsychotic treatment should also be conducted. Antipsychotic doses should be increased (while considering the patient’s state) up to an appropriate limit if sufficient treatment effects are not being achieved. Meanwhile, doses must be reduced or a different antipsychotic must be used if side effects prevent the continuation of the antipsychotic.

The primary limitation of this chapter is that its scope is limited to first‐episode psychosis. There is less evidence for first‐episode psychosis overall than for schizophrenia in general, which limits the extent of the recommendations contained herein. A summary of the chapter can be found in Table , but please refer to each CQ for more details.

References
1. Lehman AF, Lieberman JA, Dixon LB, et al. Practice guideline for the treatment of patients with schizophrenia, second edition. Am J Psychiatry. 2004;161(2 Suppl):1-56.
2. Salimi K, Jarskog LF, Lieberman JA. Antipsychotic drugs for first-episode schizophrenia: a comparative review. CNS Drugs. 2009;23:837-55. PMID: 19739694.
3. Buchanan RW, Kreyenbuhl J, Kelly DL, et al. The 2009 schizophrenia PORT psychopharmacological treatment recommendations and summary statements. Schizophr Bull. 2010;36:71-93. PMID: 19955390.
4. Hasan A, Falkai P, Wobrock T, et al. World Federation of Societies of Biological Psychiatry (WFSBP) Guidelines for Biological Treatment of Schizophrenia, part 1: update 2012 on the acute treatment of schizophrenia and the management of treatment resistance. World J Biol Psychiatry. 2012;13:318-78. PMID: 22834451.
Recommendations

When comparing SGAs and FGAs as treatments for first‐episode psychosis, short‐term studies indicate that SGAs have lower discontinuation rates (overall, due to side effects, and due to lack of efficacy) and tend to have a higher degree of symptom improvement and higher treatment response rates (A). Long‐term studies indicate that SGAs have lower relapse‐related and side‐effect‐related discontinuation rates and tend to have lower all‐cause discontinuation rates as well (A). There are a few reports of RCTs and non‐blind trials of SGAs for first‐episode psychosis, but the evidence is insufficient for an accurate comparison of SGAs; thus, they cannot be ranked relative to each other (D). Therefore, SGAs are the better choice for the treatment of first‐episode psychosis (2A). It is recommended to choose the specific SGA after considering the individual factors in each case (2D).

Explanation

A meta‐analysis has shown that antipsychotic use in first‐episode psychosis prevents relapse compared with placebo (N = 8, n = 528). Thus, the continued administration of antipsychotics is recommended. Another meta‐analysis compared the efficacy and safety of SGAs (clozapine, olanzapine, quetiapine, risperidone, amisulpride*, and ziprasidone*) with FGAs in first‐episode psychosis (N = 13, n = 2,509). In terms of efficacy, SGAs tended to be superior to FGAs with respect to the degree of symptom improvement and the treatment response rate as outcomes in short‐term (≤13 weeks) trials. Long‐term trials (24‐96 weeks) showed no significant differences between the two treatment groups in the degree of symptom improvement or the treatment response rate; however, SGAs demonstrated better results in terms of relapse rates than FGAs. Short‐term studies showed that SGAs had lower discontinuation rates (all causes, side effects, and lack of efficacy). Long‐term studies showed that SGAs had a lower discontinuation rate due to side effects and tended to have a lower all‐cause discontinuation rate.

Next, we investigated which SGA is favorable for first‐episode psychosis. However, there are no meta‐analyses that directly compare SGAs, and it was not possible to strictly define a drug’s relative superiority or inferiority. Therefore, we examined individual RCTs for first‐episode psychosis. Results from open‐label trials were also included, since there are only a few RCTs on SGAs that looked at first‐episode psychosis exclusively in Japanese populations. A 52‐week RCT which compared aripiprazole, quetiapine, and ziprasidone* showed that aripiprazole has a significantly lower all‐cause treatment discontinuation rate than quetiapine. No significant differences were observed between the groups in terms of efficacy, extrapyramidal symptoms (EPSs), weight gain, or the frequency of hyperprolactinemia‐related symptoms. A 52‐week RCT that compared aripiprazole, paliperidone, and ziprasidone* showed that aripiprazole was not as effective as paliperidone. Compared with before the start of treatment, aripiprazole caused weight gain, an increase in blood glucose levels, an increase in HbA1c, and a decrease in triglycerides. In contrast, paliperidone caused no changes in body weight but decreased HDL cholesterol levels and increased triglycerides. Meanwhile, two short‐term (≤12 weeks) open‐label trials of aripiprazole in Japanese populations showed favorable treatment response rates (42% and 78.6%). No significant increases in body weight, blood glucose level, total cholesterol level, LDL cholesterol level, or triglycerides were seen relative to before the start of treatment. Furthermore, a short‐term (eight‐week) cohort study which compared aripiprazole, olanzapine, and risperidone in Japanese populations showed that improvements in the positive syndrome, negative syndrome, and general psychopathology scores of the Positive and Negative Syndrome Scale (PANSS) were as follows: aripiprazole, 23%, 26%, and 26%; olanzapine, 30%, 28%, and 28%; and risperidone, 32%, 25%, and 29%. Two RCTs which compared olanzapine and risperidone showed that there were no differences in efficacy between the two, but that olanzapine increased body weight and risperidone tended to cause more EPS. Open‐label trials (two weeks) of risperidone in Japanese populations showed a treatment response rate of 29%, with EPS observed in 24% of the patients. Open‐label trials (four weeks) of olanzapine showed a treatment response rate of 71.6%, with significant increases in body weight, triglycerides, and total cholesterol levels observed, although no significant increases in blood glucose were observed. A 52‐week RCT which compared four SGAs (including olanzapine and quetiapine) with haloperidol showed that the discontinuation rate due to insufficient effects was significantly lower with olanzapine than with haloperidol, whereas the discontinuation rate of quetiapine was similar to that of haloperidol. Olanzapine and quetiapine both had lower all‐cause and side‐effect‐related discontinuation rates than haloperidol. No differences in the extent of symptom improvement were observed between the olanzapine and quetiapine groups, but olanzapine and quetiapine both resulted in significant weight gain. There are no clinical trial reports on first‐episode psychosis for perospirone. There are also no reliable clinical trial reports available on first‐episode psychosis for blonanserin.

There are RCTs and open‐label trials of SGAs for first‐episode psychosis, but it is difficult to establish a ranking among drugs since there are no RCTs or network meta‐analyses which directly compare all SGAs. However, meta‐analyses which examine the efficacy and safety of SGAs and FGAs suggest that SGAs should be prioritized for first‐episode psychosis. Meanwhile, drugs classified as SGAs differ in the extent of risk for individual side effects. Side effects can have a large impact on adherence to drug administration, so sufficient care must be taken with respect to the following side effects: (1) EPS (including akathisia, dyskinesia, and dystonia); (2) metabolic syndrome (weight gain, dyslipidemia, and hyperglycemia); (3) endocrine system abnormalities (eg, hyperprolactinemia); and (4) cardiovascular abnormalities (eg, QT prolongation).
Recommendation

First‐episode psychosis is generally highly sensitive to both the treatment effects and the side effects of antipsychotics (C). Risperidone and haloperidol are the only antipsychotics for which there are RCTs that examined the optimal dose by comparing effectiveness at fixed doses. Therefore, we examined the optimal doses for each antipsychotic while also including the results of trials that investigated effectiveness at variable doses. Aripiprazole has been reported to be effective at 9.9‐20.0 mg/d (D); metabolic side effects have been reported with long‐term administration (D). Olanzapine has been reported to be effective at 8.7‐17.0 mg/d (C); almost all trials reported weight gain (A). Paliperidone has been reported to be effective at 6.4 mg/d, but dyslipidemia has also been reported (D). Quetiapine has been shown to be effective at 311.4‐506 mg/d (C); treatment discontinuation rates tended to be higher than with other antipsychotics in long‐term administration trials (B). Risperidone had similar efficacy at 2 and 4 mg/d, but an RCT reported that motor function was better at 2 mg/d (C). Haloperidol had similar efficacy at 2 and 8 mg/d, but an RCT reported that EPS and hyperprolactinemia were lower at 2 mg/d (C). In conclusion, for first‐episode psychosis it is desirable to start treatment at a low dose and evaluate its effects (2C). However, it is desirable to consider increasing the dose while paying attention to side effects if the effects are insufficient (2C).

Explanation

This CQ addresses the optimal dose of antipsychotics for first‐episode psychosis. First‐episode psychosis is generally highly sensitive to both the treatment effects and the side effects of antipsychotics, and lower doses than those required for chronic schizophrenia are frequently effective (C). Therefore, we examined whether low doses are the optimal doses of antipsychotics for first‐episode psychosis. The results showed that risperidone and haloperidol are the only antipsychotics for which there are RCTs, conducted exclusively in patients with first‐episode psychosis, that compared efficacy and safety between low and standard/high doses. No meta‐analyses have been conducted either. With this in mind, we examined the optimal doses of each antipsychotic for first‐episode psychosis while also including the results of RCTs and open‐label trials which investigated efficacy and safety at variable doses.

There are no RCTs of aripiprazole that compare the doses effective for first‐episode psychosis. There are six open‐label trials with variable doses, and the final average dose which showed high effectiveness during short‐term (4‐12 weeks) administration was 16.8‐20.0 mg/d in three trials. However, there is one small‐scale trial (n = 19) which showed efficacy at a comparatively low average dose of 9.9 mg/d; even this dose, however, was not well tolerated in that trial. There are two one‐year open‐label RCTs, which reported a final average dose of 11.6‐14.5 mg/d. One of these trials showed significantly higher blood glucose levels and HbA1c than baseline values.

There are no RCTs of olanzapine which compare the doses effective for first‐episode psychosis. There are 11 RCTs with variable doses. The average dose in six trials which showed efficacy over short/medium‐term (4‐16 weeks) administration was 9.1‐17.0 mg/d. The average dose in five trials which showed efficacy over long‐term (1‐3 years) administration was 8.7‐12.6 mg/d. These doses were lower than the average olanzapine dose of 20.1 mg/d in the Clinical Antipsychotic Trials of Intervention Effectiveness (CATIE), a 1.5‐year large‐scale RCT of patients with chronic schizophrenia. Significant increases in body weight relative to baseline were seen with olanzapine in almost all trials.

There are no RCTs of paliperidone that compared the doses effective for first‐episode psychosis. There is one one‐year open‐label RCT with variable doses that showed high efficacy at a final average dose of 6.4 mg/d, but also dyslipidemia. There are no clinical trial reports of perospirone in first‐episode psychosis, and there are no reliable clinical trial reports of blonanserin in first‐episode psychosis.

There are no RCTs of quetiapine that compared the doses effective for first‐episode psychosis. There are five RCTs with variable doses. The average dose in two trials which showed efficacy with short‐term (6‐12 weeks) administration was 358.3‐413.8 mg/d. The average dose in three trials which showed efficacy with long‐term (one‐year) administration was 311.4‐506 mg/d. These doses are lower than the average quetiapine dose of 543.4 mg/d in CATIE. The all‐cause treatment discontinuation rate of quetiapine in long‐term trials was 53%‐82.3% and tended to be higher than for other antipsychotics.

There is one eight‐week RCT (n = 49) which compared the effects of fixed doses (2 or 4 mg/d) of risperidone on first‐episode psychosis. The study revealed similar efficacy but superior motor function at 2 mg/d. There are ten RCTs with variable doses. The average dose in six trials which showed efficacy with short/medium‐term (4‐16 weeks) administration was 3.6‐6.1 mg/d. The average dose in four trials which showed efficacy with long‐term (1‐3 years) administration was 2.4‐3.6 mg/d. These doses tended to be similar to or slightly lower than the average risperidone dose of 3.9 mg/d in CATIE. A comparison of low risperidone doses (<6 mg/d) and high doses (over 6 mg/d) in a post hoc analysis of a six‐week RCT (n = 183) showed similar efficacy but superior safety in the low‐dose group. A comparison between low doses (1‐4 mg/d) and high doses (5‐8 mg/d) in a one‐year open‐label trial (n = 74) with variable doses showed that efficacy and tolerability were higher in the low‐dose group. A comparison between a fixed dose of 2 mg/d and variable doses of 2‐4 mg/d in an eight‐week open‐label trial (n = 96) showed that the low 2 mg/d dose had efficacy and tolerability virtually identical to those of dosing up to 4 mg/d.

Haloperidol is the most extensively studied FGA. One six‐week RCT (n = 40) compared the efficacy and safety of fixed haloperidol doses (2 or 8 mg/d) for first‐episode psychosis and showed that efficacy was the same for both doses, but EPS and hyperprolactinemia were significantly lower in the low‐dose group. There are nine RCTs with variable doses. The average dose in four trials which showed efficacy with short‐term (6‐12 weeks) administration was 4.2‐15.6 mg/d. The average dose in five trials which showed efficacy with long‐term (1‐3 years) administration was a low 2.9‐4.8 mg/d. Of the long‐term trials, three showed that the treatment discontinuation rate in the haloperidol group was significantly higher than in the SGA groups.

The results described above provide weak evidence that low doses of risperidone and haloperidol have high efficacy and tolerability (C). It has also been reported that aripiprazole is effective even at low doses but has low tolerability (D). SGAs other than risperidone require further research to investigate their effective doses. As such, for first‐episode psychosis it is desirable to first start treatment at a low dose and evaluate its effects (2C). However, it is desirable to consider increasing the dose while paying attention to side effects if the effects are insufficient (2C).
Recommendation

Approximately 60%‐70% of patients may respond to treatment by 2‐4 weeks after starting treatment with antipsychotics for first‐episode psychosis (D), but patients may also respond after this period (D). Therefore, it is desirable to wait at least 2‐4 weeks after the start of treatment to determine the response to treatment (2D). However, increasing the dose while paying attention to side effects may be considered before the 2‐4‐week period if the response to treatment is insufficient at low doses (2D).

Explanation

This CQ describes the optimal treatment response decision period after starting treatment with antipsychotics for first‐episode psychosis. The time period over which the first effects of antipsychotics appear is an extremely important aspect of antipsychotic treatment when considering changes to administered doses and antipsychotics. Cases where treatment effects are observed shortly after starting treatment and where no clinically problematic side effects appear will usually result in the continuation of that antipsychotic at the same dose. In contrast, cases where no or insufficient effects are seen will likely need the dose increased up to the optimal dose (see CQ1‐2). A challenge in clinical practice is how long one should wait before switching antipsychotics when no problematic side effects occur but a poor treatment response is observed even after increasing to an optimal dose. Many current treatment guidelines for acute schizophrenia with multiple episodes recommend an effect decision period of 4‐6 weeks. This guideline also recommends an observation period of 2‐4 weeks for recurrent/relapsing cases (see CQ2‐1). Meanwhile, the number of RCTs that examine the optimal treatment response decision period for first‐episode psychosis is limited, and no meta‐analyses have been conducted. With this in mind, this CQ examined studies and reports that explored the treatment response decision period.

This CQ used the most common definition of treatment response, which is an improvement of over 20% in the total PANSS score relative to the baseline value. Treatment response by two weeks after starting treatment has been suggested to be a predictor of response at 12 weeks in first‐episode psychosis, similar to recurrent/relapsing cases. Emsley et al. (n = 522) reported that 76.6% of patients met the definition of treatment response within the trial period, with 35.6% and 59.4% of patients responding by two and four weeks, respectively. Schennach‐Wolff et al. (n = 188) reported that 72% of patients had a treatment response by two weeks. Based on these reports, approximately 60%‐70% of patients with first‐episode psychosis may have a treatment response by 2‐4 weeks (D). Meanwhile, it is clinically established that there are many patients with first‐episode psychosis whose treatment response takes time (D). Studies on whether early treatment response is a predictor of long‐term remission or recovery showed that treatment response by the six‐week mark was a predictor of subsequent remission. Furthermore, patients who did not show a treatment response at six weeks could also meet the definition of subsequent remission (D). There are currently no clinical trials that compared the effectiveness of switching versus continuing antipsychotics based on the treatment response at 2‐4 weeks for first‐episode psychosis, and further research is needed.

In conclusion, it is desirable to wait at least 2‐4 weeks after starting treatment to determine the response to treatment (2D). However, regarding the effect decision at low doses described in CQ1‐2, increasing the dose while paying attention to side effects may be considered before the 2‐4‐week period if the response to treatment is insufficient at low doses (2D).
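To make the response criterion above concrete (this worked example is illustrative and is not taken from the guideline; the scores used are hypothetical), the percent improvement in the total PANSS score is calculated relative to baseline:

\[
\text{Improvement (\%)} = \frac{\mathrm{PANSS}_{\text{baseline}} - \mathrm{PANSS}_{\text{endpoint}}}{\mathrm{PANSS}_{\text{baseline}}} \times 100
\]

For example, a patient whose total PANSS score falls from 100 at baseline to 75 at the evaluation point shows a (100 - 75)/100 x 100 = 25% improvement and would meet the >20% response threshold. Some trials instead subtract the minimum possible total score of 30 from the baseline in the denominator, ie, (baseline - endpoint)/(baseline - 30) x 100, which yields larger percentages (about 35.7% in this example); the guideline text does not specify which variant the cited studies used.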
Recommendation
Continued administration of antipsychotics reduces the recurrence rate for at least one year A. Continuing the administration of antipsychotics for at least one year is recommended for preventing the recurrence of first-episode psychosis 1A.
Explanation
This CQ describes the optimal duration of antipsychotic treatment continuation for first-episode psychosis, that is, how long antipsychotic treatment should be continued in cases where remission or recovery of symptoms is seen. Leucht et al. examined the recurrence prevention effects of antipsychotics in a Cochrane Review of 65 RCTs and reported that antipsychotics had a significant recurrence prevention effect relative to placebo over a half-year to one-year timespan (see CQ3-1 ⇒ p. 54). A sensitivity analysis performed separately for first-episode cases and others also showed similar recurrence prevention effects A. Gitlin et al. reported that relapse/recurrence was observed at rates of 78% and 98% after one and two years, respectively, in schizophrenia patients who received treatment and then, with their consent, suspended it within two years. This indicates that antipsychotics have a clear recurrence prevention effect, so continuous administration for as long as possible is desirable. However, most clinical trials were conducted over a duration of <2 years, and longer-term treatment effects are unknown. There are also only a limited number of RCTs that examined the optimal treatment continuation duration of SGAs specifically for first-episode psychosis, and there are no meta-analyses that examined the recurrence risks of treatment continuation versus reduced/intermittent/suspended doses. Wunderink et al. conducted one RCT in 131 first-episode psychosis patients in whom half a year had passed since remission, comparing recurrence rates and social/professional functioning 1.5 years after either continuing treatment or decreasing/suspending doses. The results showed that the recurrence rate in the reduced/suspended-dose group was approximately twice as high and that this group had no benefits over the group that continued treatment. However, a subsequent seven-year follow-up study showed that the reduced/suspended-dose group had a recovery rate that was significantly higher (by approximately a factor of two) than that of the treatment continuation group C. This was the first high-quality clinical report indicating long-term benefits of decreasing/suspending antipsychotics in first-episode psychosis patients in remission. However, many individuals in the reduced/suspended-dose group only had their doses reduced rather than suspended, which can be interpreted as indicating that patients whose illness severity allows for dose reduction have a favorable prognosis. Further controlled studies are necessary to analyze this in detail. In conclusion, it is desirable to continue antipsychotic treatment for as long as possible when remission of symptoms is seen in first-episode psychosis, but it is preferable to make decisions after fully sharing the risks and benefits of reducing/suspending doses with the patient.
Introduction
Schizophrenia is a chronic disease, and many patients who have become stable with treatment can relapse or experience acute exacerbations. The main causes of relapse/acute exacerbation are lack of adherence to antipsychotics or major life events such as stress, but many patients experience relapse/acute exacerbations as part of the natural course of schizophrenia even while continuing pharmacological treatment. This chapter describes the pharmacological treatment of acute psychotic symptoms other than those of first-episode psychosis. Since there is no fixed definition of "relapse" or "acute exacerbation," the meanings of both differ slightly between reports. Thus, these definitions are applied broadly in this chapter to patients who exhibit exacerbations on evaluation scales such as the Positive and Negative Syndrome Scale (PANSS) or the Brief Psychiatric Rating Scale (BPRS) after 3-6 months have passed since stabilizing with remission or partial remission. There are no meta-analyses restricted to recurrence/relapse cases, so we evaluated evidence based on RCTs. It is therefore difficult to present clear-cut results, but we believe it is helpful to know the efficacy and safety of each treatment based on the respective evidence and to apply the most useful option for each patient. It is especially important to pay attention to the side effects that occur with long-term medication, as well as to the short-term side effects that the therapist can easily recognize. The limitation of the evidence reviewed in this chapter is that there are no studies limited to the elderly or children, and there is little evidence on recurrent/relapsing cases alone. Further research is needed to fill these gaps in the literature. The significance of each CQ examined in this chapter is shown below, and a summary of the chapter is shown in the Table. Please refer to each CQ, including the explanations, for specific content. At recurrence/relapse of psychotic symptoms during continued treatment, clinicians often find it difficult to judge whether to increase the dose or switch to another antipsychotic; this topic was set as CQ2-1. Furthermore, which of the available antipsychotics should be selected and which dose is appropriate is a constant clinical question; this topic was set as CQ2-2. The question of whether antipsychotic monotherapy or combination therapy with two or more antipsychotics is useful was set as CQ2-3. The question of the usefulness of concomitant therapy with psychotropic drugs other than antipsychotics was set as CQ2-4. It is important to understand the evidence regarding the usefulness of antipsychotic monotherapy and concomitant therapy.
CQ2-1 Which is more appropriate, increasing the dose of the current antipsychotic or switching to another one?
Recommendation
Confirming whether the currently administered dose, duration, and adherence of the antipsychotic are appropriate is recommended before considering switching or increasing the antipsychotic dose 1D. At the time of recurrence/relapse due to discontinuation of medication, selecting an antipsychotic to restart in consideration of the response to past antipsychotics, including side effects, is recommended 1D. If medication adherence is good and blood concentrations are in the effective range but there is a poor response, switching to another antipsychotic is recommended; however, if there is room to increase the dose and there is no problem with tolerability, increasing the dose is desirable 2D.
Switching to another antipsychotic is desirable if the patient has been observed for 2-4 weeks after increasing the dose but there is no response by eight weeks at the latest 2C. Selecting an antipsychotic for which blood concentrations can be measured (eg, haloperidol) or a long-acting injection (LAI) is desirable to exclude non-adherence, increased drug metabolism, and impaired absorption 2D. There is little evidence that rapidly increasing doses or exceeding the recommended doses is effective; it is desirable to avoid both, since there is also a risk that side effects may be enhanced 2D. In conclusion, it is desirable to attempt to increase the dose rather than switching the antipsychotic at the time of recurrence/relapse of schizophrenia 2D.
Explanation
There are no trials that included only recurrent/relapsing cases of schizophrenia and compared the effectiveness of switching versus increasing doses. It has been reported that there were no differences in response rate between switching and increasing doses for acute schizophrenia, but many guidelines recommend increasing doses to the maximum level while confirming adherence and side effects, and observing for a sufficient period before switching. Many guidelines also recommend that, in recurrent cases due to discontinuation of medication, the antipsychotic to restart be selected by referring to the effectiveness and tolerability of previously used antipsychotics. Clinical trial results in acute psychiatric treatment showed that improvements in the two weeks after starting administration were greater than during any subsequent period, and that almost all of the improvement obtained in the year after starting drug administration was observed within one month of starting medication. Poor response in the first two weeks of medication was a predictor of a lack of subsequent improvement with a probability of about 80%. Therefore, the possibility of a subsequent response is low if an improvement in symptoms of 20%-25% is not observed within two weeks at an appropriate dose. Other reports showed that the response within a 2-6-week observation period accurately reflected subsequent response and remission, but there were no reports with an observation period of more than eight weeks. Confirmation of blood antipsychotic concentrations or the use of LAI was useful to exclude "pseudo-resistance". A few reports showed that rapidly increased doses were effective and safe, such as when quetiapine was increased to 800 mg/d within four days or when patients with a history of clozapine medication had their clozapine doses increased to an average of 353 mg/d in about four days. However, there is one case report showing that rapid increases in quetiapine resulted in hypokalemia, and rapid increases in clozapine have been associated with an increased risk of myocarditis; therefore, rapid dose increases should be avoided to prevent side effects. There is also little evidence that doses exceeding the recommended doses are effective, and they may exacerbate side effects. (A rough, illustrative sketch of this decision flow is given after the reference list below.)
References
1. Kinon BJ, Kane JM, Johns C, et al. Treatment of neuroleptic-resistant schizophrenic relapse. Psychopharmacol Bull. 1993;29:309-14.
2. Lehman AF, Lieberman JA, Dixon LB, et al. Practice guideline for the treatment of patients with schizophrenia, second edition. Am J Psychiatry. 2004;161(2 Suppl):1-56.
3. Psychosis and schizophrenia in adults: treatment and management. NICE clinical guideline 178, 2014.
4. Hasan A, Falkai P, Wobrock T, et al. World Federation of Societies of Biological Psychiatry (WFSBP) Guidelines for Biological Treatment of Schizophrenia, part 1: update 2012 on the acute treatment of schizophrenia and the management of treatment resistance. World J Biol Psychiatry. 2012;13:318-78.
5. Barnes TR. Evidence-based guidelines for the pharmacological treatment of schizophrenia: recommendations from the British Association for Psychopharmacology. J Psychopharmacol. 2011;25:567-620.
6. Agid O, Kapur S, Arenovich T, et al. Delayed-onset hypothesis of antipsychotic action: a hypothesis tested and rejected. Arch Gen Psychiatry. 2003;60:1228-35.
7. Agid O, Seeman P, Kapur S. The "delayed onset" of antipsychotic action–an idea whose time has come and gone. J Psychiatry Neurosci. 2006;31:93-100.
8. Correll CU, Malhotra AK, Kaushik S, et al. Early prediction of antipsychotic response in schizophrenia. Am J Psychiatry. 2003;160:2063-5.
9. Chang YC, Lane HY, Yang KH, et al. Optimizing early prediction for antipsychotic response in schizophrenia. J Clin Psychopharmacol. 2006;26:554-9.
10. Leucht S, Busch R, Kissling W, et al. Early prediction of antipsychotic nonresponse among patients with schizophrenia. J Clin Psychiatry. 2007;68:352-60.
11. Lin CH, Chou LS, Lin CH, et al. Early prediction of clinical response in schizophrenia patients receiving the atypical antipsychotic zotepine. J Clin Psychiatry. 2007;68:1522-7.
12. Ascher-Svanum H, Nyhuis AW, Faries DE, et al. Clinical, functional, and economic ramifications of early nonresponse to antipsychotics in the naturalistic treatment of schizophrenia. Schizophr Bull. 2008;34:1163-71.
13. Kinon BJ, Chen L, Ascher-Svanum H, et al. Predicting response to atypical antipsychotics based on early response in the treatment of schizophrenia. Schizophr Res. 2008;102:230-40.
14. Lambert M, Schimmelmann BG, Naber D, et al. Early- and delayed antipsychotic response and prediction of outcome in 528 severely impaired patients with schizophrenia treated with amisulpride. Pharmacopsychiatry. 2009;42:277-83.
15. Derks EM, Fleischhacker WW, Boter H, et al. Antipsychotic drug treatment in first-episode psychosis: should patients be switched to a different antipsychotic drug after 2, 4, or 6 weeks of nonresponse? J Clin Psychopharmacol. 2010;30:176-80.
16. Kinon BJ, Chen L, Ascher-Svanum H, et al. Early response to antipsychotic drug therapy as a clinical marker of subsequent response in the treatment of schizophrenia. Neuropsychopharmacology. 2010;35:581-90.
17. Hatta K, Otachi T, Sudo Y, et al. Difference in early prediction of antipsychotic non-response between risperidone and olanzapine in the treatment of acute-phase schizophrenia. Schizophr Res. 2011;128:127-35.
18. Levine SZ, Leucht S. Early symptom response to antipsychotic medication as a marker of subsequent symptom change: an eighteen-month follow-up study of recent episode schizophrenia. Schizophr Res. 2012;141:168-72.
19. Midha KK, Hubbard JW, Marder SR, et al. Impact of clinical pharmacokinetics on neuroleptic therapy in patients with schizophrenia. J Psychiatry Neurosci. 1994;19:254-64.
20. Ulrich S, Wurthmann C, Brosz M, et al. The relationship between serum concentration and therapeutic effect of haloperidol in patients with acute schizophrenia. Clin Pharmacokinet. 1998;34:227-63.
21. Schulte P. What is an adequate trial with clozapine?: therapeutic drug monitoring and time to response in treatment-refractory schizophrenia. Clin Pharmacokinet. 2003;42:607-18.
22. Kane JM, Garcia-Ribera C. Clinical guideline recommendations for antipsychotic long-acting injections. Br J Psychiatry Suppl. 2009;52:S63-7.
23. Peuskens J, Devoitille JM, Kusters J, et al. An open multicentre pilot study examining the safety, efficacy and tolerability of fast titrated (800 mg/day by day 4) quetiapine in the treatment of schizophrenia/schizoaffective disorder. Int J Psychiatry Clin Pract. 2008;12:261-7.
24. Ifteni P, Nielsen J, Burtea V, et al. Effectiveness and safety of rapid clozapine titration in schizophrenia. Acta Psychiatr Scand. 2014;130:25-9.
25. Lin YC, Chen HZ, Chang TJ, et al. Hypokalemia following rapid titration of quetiapine treatment. J Clin Psychiatry. 2008;69:165-6.
26. Ronaldson KJ, Fitzgerald PB, Taylor AJ, et al. Rapid clozapine dose titration and concomitant sodium valproate increase the risk of myocarditis with clozapine: a case-control study. Schizophr Res. 2012;141:173-8.
27. Davis JM, Chen N. Dose response and dose equivalence of antipsychotics. J Clin Psychopharmacol. 2004;24:192-208.
28. Kinon BJ, Volavka J, Stauffer V, et al. Standard and higher dose of olanzapine in patients with schizophrenia or schizoaffective disorder: a randomized, double-blind, fixed-dose study. J Clin Psychopharmacol. 2008;28:392-400.
29. CADTH. A Systematic Review of Combination and High-Dose Atypical Antipsychotic Therapy in Patients with Schizophrenia. 2011.
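Purely as an illustration of the decision flow described in CQ2-1, and not as a clinical algorithm, the sketch below encodes the observation windows and thresholds mentioned above in Python. All field and function names are our own invention, the clinical checks are reduced to booleans, and the 20% improvement threshold is taken from the early-response discussion in the explanation.

```python
from dataclasses import dataclass


@dataclass
class TrialState:
    """Simplified snapshot of an antipsychotic trial at recurrence/relapse (hypothetical fields)."""
    adherence_confirmed: bool      # adherence, effective blood level, or LAI use verified
    at_optimal_dose: bool          # already titrated to the optimal/maximum tolerated dose
    tolerability_ok: bool          # no problematic side effects
    weeks_observed: int            # weeks since the last dose adjustment
    percent_improvement: float     # symptom improvement vs baseline (eg, PANSS %)


def next_step(state: TrialState) -> str:
    """Rough sketch of the CQ2-1 recommendation; thresholds come from the text above."""
    if not state.adherence_confirmed:
        return "confirm adherence/blood concentration (consider a measurable drug or LAI)"
    if state.percent_improvement >= 20.0:
        return "continue the current antipsychotic"
    if not state.at_optimal_dose and state.tolerability_ok:
        return "increase the dose (avoid rapid or above-recommended increases)"
    if state.weeks_observed < 4:
        return "observe for 2-4 weeks after the dose increase"
    if state.weeks_observed >= 8:
        return "switch to another antipsychotic (no response by 8 weeks at the latest)"
    return "reassess; consider switching, and switch by 8 weeks at the latest if no response"


# Example: optimal dose reached, good adherence, 10% improvement after 8 weeks.
print(next_step(TrialState(True, True, True, 8, 10.0)))  # suggests switching
```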
Recommendation
The evidence for each antipsychotic drug when compared to placebo is described below, but there is insufficient evidence regarding comparisons between the individual antipsychotics. The factors of each case must be considered individually for drug selection; therefore, no recommendation for specific drugs is provided. Aripiprazole has both high efficacy A and tolerability A at over 10 mg/d. Blonanserin is effective at 2.5, 5, or 10 mg/d B. Haloperidol is effective at over 10 mg/d A or over 4 mg/d B, but the incidence of EPS is high at either dose A. Olanzapine is effective at over 10 mg/d C, but caution is required because of potential weight gain A. Quetiapine is effective at over 250 mg/d B and may be effective even at over 150 mg/d C; the certainty of evidence for efficacy is weak to moderate, but tolerability is high A. Risperidone is effective at over 2 mg/d A, but increases in prolactin levels A and drug-induced parkinsonism B are common, so caution is required because of side effects. Zotepine is effective at over 150 mg/d C.
Explanation
This CQ examines double-blind RCTs that focused only on recurrent/relapsing cases of schizophrenia and describes the antipsychotics for which evidence has been obtained. As such, no evaluation or explanation is provided for antipsychotics that have not undergone double-blind RCTs restricted to recurrent/relapsing cases; however, this does not indicate that the drugs not mentioned are not useful. There are four placebo-controlled RCTs (total n = 1402) of aripiprazole, all of which showed the efficacy of aripiprazole. The dose settings ranged from 2-30 mg/d, but efficacy was observed at 10 mg/d or higher. There were no major differences from placebo in the incidence of side effects in any trial, and tolerability was high. There are four placebo-controlled RCTs (total n = 1671) of quetiapine, of which one showed higher efficacy than placebo at doses over 150 mg/d and one at over 250 mg/d. However, the remaining two trials did not show any differences from placebo at doses of 300-800 mg/d. Although the most frequent side effect was reported to be agitation, it was also reported that irritability was lower than with placebo, so no consensus has been reached. Tolerability in both reports was high. Of the two trials of quetiapine extended-release, one showed significant effects only at 600 mg/d when compared to placebo. There are two RCTs (total n = 450) comparing olanzapine with placebo. One of these trials showed that doses over 7.5 mg/d were more effective than placebo, but the other trial showed no significant effects at a dose of 15 mg/d when compared to placebo. Patients taking olanzapine showed significant weight gain relative to those taking placebo in both trials. There are two placebo-controlled RCTs (total n = 386) of risperidone. Both trials showed that risperidone was effective at doses of 2-8 mg/d. Abnormal increases in prolactin levels were common among those who took risperidone in both trials, and drug-induced parkinsonism and trial discontinuation rates were significantly higher in one trial. There is one placebo-controlled RCT (n = 247) of blonanserin, which showed that blonanserin had higher efficacy than placebo at doses over 2.5 mg/d and that efficacy was even higher in the 10 mg/d group than in the 2.5 mg/d group. There were no differences in efficacy between 5 and 10 mg/d, but the expression of EPS was higher at 10 mg/d than at the other doses.
There are no placebo-controlled RCTs of paliperidone or perospirone. Haloperidol has the greatest number of reported RCTs among the first-generation antipsychotics (FGAs) investigated in recurrent/relapsing cases. There are five reports that compared haloperidol with placebo, with moderate sample sizes of 100-200. Most doses were set at 10-20 mg/d, but one trial used a relatively low dose of 4 mg/d. Haloperidol was effective relative to placebo in all reports. However, haloperidol had a high expression of EPS, which was seen even at the comparatively low dose setting of 4 mg/d. Chlorpromazine* (CP) had the second-largest number of reports after haloperidol, with three reports. Of these, one showed significant differences when compared with placebo at a dose of 1000 mg/d. One report showed a significant trend, but this trial was extremely small in scale, with a total sample size of 19 people across both groups. Of these three trials, the trial with the largest sample size of 106 subjects did not report any efficacy. Reports on other FGAs include one each for fluphenazine and zotepine. There is only one report, from 1971, which studied the effectiveness of fluphenazine in recurrent/relapsing cases; that study showed superiority relative to placebo at a level similar to chlorpromazine, but the reliability of the results is low due to its small scale. There is one report on zotepine, which used chlorpromazine and placebo as control groups and showed that zotepine was effective; the expression of EPS was low compared to chlorpromazine. The four antipsychotics undergoing clinical trials in Japan as of December 2014 are asenapine, cariprazine, lurasidone, and ziprasidone*. The effectiveness of each drug in recurrent/relapsing cases has been confirmed, but details cannot be given since they have not yet been approved in Japan.
Head-to-head analysis of antipsychotics
Ten RCTs compared second-generation antipsychotics (SGAs) and FGAs. The control FGA in all trials was haloperidol. The breakdown of SGAs is as follows: aripiprazole, one trial; asenapine, one trial; blonanserin, one trial; olanzapine, two trials; quetiapine, two trials; risperidone, one trial; and ziprasidone, two trials. These SGAs had efficacy similar to the FGA haloperidol A, a lower frequency of EPS expression in terms of tolerability A, and smaller increases in prolactin levels A. Therefore, SGAs are more useful than FGAs A. As for RCTs comparing SGAs, there were two that compared aripiprazole and risperidone, one that compared aripiprazole and olanzapine (comparing tolerability only), and one that compared risperidone and ziprasidone*. Aripiprazole had efficacy similar to risperidone A, while risperidone showed greater increases in prolactin levels A and more EPS C in terms of tolerability. Weight gain of 7% or more was more frequent with olanzapine than with aripiprazole B, and lipid metabolism disorders were also more frequent with olanzapine B.
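For quick reference, the per-drug dose thresholds and evidence grades stated in this recommendation can also be laid out as a small lookup table. This is only a restatement of the text above in a data structure (doses in mg/day, grades as quoted in the recommendation); it adds no dosing guidance of its own, and the variable name is our own.

```python
# Restatement of the CQ2-2 recommendation as a lookup table (doses in mg/day).
# "efficacy" and the grades in "note" are the evidence grades quoted in the text above.
EFFECTIVE_DOSE_SUMMARY = {
    "aripiprazole": {"dose": "over 10", "efficacy": "A", "note": "high tolerability (A)"},
    "blonanserin":  {"dose": "2.5, 5, or 10", "efficacy": "B", "note": "more EPS at 10 mg/d"},
    "haloperidol":  {"dose": "over 10 (A) or over 4 (B)", "efficacy": "A/B", "note": "high EPS incidence at either dose (A)"},
    "olanzapine":   {"dose": "over 10", "efficacy": "C", "note": "weight gain (A)"},
    "quetiapine":   {"dose": "over 250 (B), possibly over 150 (C)", "efficacy": "B/C", "note": "high tolerability (A)"},
    "risperidone":  {"dose": "over 2", "efficacy": "A", "note": "prolactin increase (A), drug-induced parkinsonism (B)"},
    "zotepine":     {"dose": "over 150", "efficacy": "C", "note": ""},
}

# Example lookup:
print(EFFECTIVE_DOSE_SUMMARY["quetiapine"]["dose"])  # "over 250 (B), possibly over 150 (C)"
```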
Recommendation
Antipsychotic combination therapy can be more effective than monotherapy, but its effects are unclear and it may increase side effects C. Therefore, it is desirable not to conduct antipsychotic combination therapy at the time of recurrence/relapse 2C.
Explanation
There are no trials that included only recurrent/relapsing schizophrenia cases and compared monotherapy with combination therapy. Meta-analyses that compared monotherapy and combination therapy during the acute phase of schizophrenia showed that combination therapy, such as combinations with clozapine or the concomitant use of FGAs and SGAs, may be more effective than monotherapy under specific conditions, but side effects have been examined only insufficiently. Additionally, publication bias or subject heterogeneity may have affected the results. The combination of olanzapine and risperidone may be more effective than monotherapy for psychiatric symptoms, but it was also shown that concomitant use of aripiprazole with risperidone or quetiapine was ineffective, suggesting that the effects may differ depending on the drug combination. Reports have indicated that combination therapy with aripiprazole improved negative symptoms, hyperprolactinemia associated with risperidone, and weight gain associated with clozapine. There are positive reasons for pursuing antipsychotic combination therapy, such as more rapid and powerful expression of effects, improvement of various symptoms (eg, irritability, cognitive impairment, and negative symptoms), and improvement of comorbid symptoms (insomnia, anxiety, and depression). On the other hand, there are also negative reasons, such as a drug switch being abandoned partway through and the prescribing habits of physicians. Risks of combination therapy include increases in the total dose beyond what is necessary, increases in acute or delayed side effects, unpredictable drug interactions, difficulty in identifying which antipsychotic causes the effects or side effects, decreased adherence, an increased mortality rate, and higher medical costs. In clinical practice, the frequency of antipsychotic combination therapy is high worldwide, including in Japan. Therefore, clozapine monotherapy, for which evidence on effects and side effects in treatment-resistant schizophrenia is established, is prioritized over combination therapy after considering the risks of combination therapy and the uncertainty of its effects (see Chapter 4 (⇒ pg. 67)). Combination therapy should be used with caution and only in severe cases with a poor response to monotherapy, including to clozapine.
Recommendation
The concomitant use of benzodiazepines (BZs) at the time of recurrence/relapse of schizophrenia can be effective D, but concomitant use over long periods is not desirable because of potential side effects and dependency 2D. The concomitant use of valproic acid during the recurrence/relapse of schizophrenia is effective only for short periods of <3 weeks D, but negative symptoms worsen over longer periods C; from the viewpoint of tolerability as well D, it is desirable not to implement long-term administration 2D. The effectiveness of concomitant therapy with antidepressants or other mood stabilizers at the time of recurrence/relapse of schizophrenia is not clear D; therefore, it is desirable not to use them concomitantly 2D.
Explanation
Concomitant therapy with antipsychotics and other psychotropic drugs may be applied in the pharmacological treatment of the acute phase of schizophrenia. However, few clinical trials have examined whether the concomitant use of antipsychotics and other psychotropic drugs is effective during recurrence or relapse. Psychotropic drugs used concomitantly at the time of recurrence/relapse include BZs, mood stabilizers, and antidepressants. There is only one RCT that examined whether the concomitant use of BZs is effective during recurrence/relapse. This study specifically looked at the concomitant use of haloperidol and alprazolam and was a small-scale (n = 28) study that evaluated effects over an extremely short observation period of 72 hours. The results showed that concomitant use of alprazolam was effective over short periods for patients with high irritability. However, there is no evidence on the effectiveness of concomitant use of BZs other than alprazolam or on concomitant use with SGAs. BZs are often used in actual clinical settings for both short and long periods; however, they should not be used, since these drugs carry a risk of dependency and may increase the mortality rate. There are three RCTs on the effectiveness of concomitant therapy with mood stabilizers at the time of recurrence/relapse, but all of these studied the concomitant use of valproic acid and antipsychotics (risperidone, olanzapine, and haloperidol). The trial designs and results differed between trials. Short-term trials with an observation period of <1 month showed significant improvements in the concomitant-use group up to 21 days after starting concomitant therapy, but there were no overall differences between the two groups by the 28th day of medication. Furthermore, an 84-week follow-up study of an SGA plus valproic acid group and an SGA monotherapy group showed that concomitant therapy was not superior to monotherapy, while the antipsychotic monotherapy group showed significant improvements in negative symptoms. There were also no differences in tolerability between the two groups; however, thrombocytopenia, liver dysfunction, weight gain, and increased LDL cholesterol values were observed in the SGA plus valproic acid group. Improvement may be expected with short-term concomitant use within three weeks, but negative symptoms may even worsen in the long term. Other mood stabilizers may have similar effects and potential exacerbating effects, but no clinical trials have been conducted. Carbamazepine is licensed in Japan for psychomotor agitation in schizophrenia, but there is little evidence from clinical practice on whether carbamazepine is effective at the time of recurrence/relapse of schizophrenia.
There is one systematic review that showed negative results for the use of carbamazepine in schizophrenia, although it was not restricted to recurrent/relapsing cases. There are also no RCTs showing the effectiveness of the concomitant use of lithium at the time of recurrence/relapse. As for RCTs examining whether the concomitant use of antidepressants is effective during recurrence/relapse, there is one study that compared concomitant therapy with olanzapine and fluvoxamine (50 mg/d) against olanzapine monotherapy. The results showed that the olanzapine plus fluvoxamine group had significantly improved symptoms. However, the number of patients in this study was extremely small, with just 12 patients. Moreover, rather than reflecting a concomitant effect of the antidepressant itself, the clinical effects of concomitant olanzapine and fluvoxamine may result from increased blood olanzapine concentrations. In addition, tolerability was not clearly described in this study. Thus, there is currently no clear evidence on the concomitant use of antipsychotics and antidepressants at the time of recurrence/relapse. As a result, such concomitant therapy is not recommended at the time of recurrence/relapse.
At the time of recurrence/relapse of schizophrenia, although it is effective the concomitant use of benzodiazepines (BZs) D . Concomitant use is not desirable for long periods because of potential side effects and dependency 2D . The concomitant use of valproic acid during the recurrence/relapse of schizophrenia is effective if only for <3 weeks D but negative symptoms worsen for longer periods C . From the view point of tolerability D , it is desirable not to implement long‐term administration 2D . The effectiveness of the concomitant therapy of antidepressants and other mood stabilizers at the time of recurrence/relapse of schizophrenia is not clear D , Therefore, it is desirable not to use them in concomitant 2D .
Concomitant therapy with antipsychotics and other psychotropic drugs may be applied to the pharmacological therapy for acute phase of schizophrenia. However, few clinical trials examined whether the concomitant use of antipsychotics and other psychotropic drugs is effective during recurrence or relapse. Psychotropic drugs used concomitantly include at the time of recurrence/relapse, BZ drugs, mood stabilizers, and antidepressants. There is only one RCT which examined whether the concomitant use of BZs is effective during recurrence/relapse. This study specifically looked at the concomitant use of haloperidol and alprazolam and was a small‐scale (n = 28) study that evaluated effects during an extremely short observation period of 72 hours. The results showed that concomitant use of alprazolam was effective over short periods for patients with high irritability. However, there was no evidence of the effectiveness of the concomitant use of BZ other than alprazolam or on concomitant use with SGAs. BZ drugs were often used in actual clinical settings for short and long periods. However, these should not be used since these drugs have dependency and possibly increase the mortality rate. There are three RCTs on the effectiveness of concomitant therapy of mood stabilizers at the time of recurrence/relapse but all of these studied the effectiveness of concomitant therapy of valproic acid and antipsychotics (risperidone, olanzapine, and haloperidol). , , Different trial designs and results were shown for each trial. Short‐term trials with an observation period of <1 month showed significant improvements in the concomitant use group up to 21 days after starting concomitant therapy, , but there were no differences overall between the two groups by the 28th day of medication. However, an 84‐week follow‐up study of an SGA+valproic acid group and SGA monotherapy group showed that concomitant therapy was not superior to the monotherapy group, while the antipsychotic monotherapy group showed significant improvements in negative symptoms. There were also no differences in tolaribility between the two groups. However, thrombocytopenia, liver dysfunction, weight gain, and increased LDL cholesterol values were observed in the SGAplus valproic acid group. It may be possible to expect improvement effects in short‐term concomitant use within three weeks, but negative symptoms may also even worsen in the long term. Other mood stabilizers may have similar effects and potential exacerbation effects but no clinical trials have been conducted. Carbamazepine is licensed in Japan for psychomotor agitation in schizophrenia, but there is little evidence in clinical practice on whether carbamazepine is effective at the time of recurrence/relapse schizophrenia. There is a one systematic review that showed negative results for using carbamazepine for schizophrenia, although this was not restricted to recurrent/relapsing cases. There are also no RCTs that showed the effectiveness of the concomitant use of lithium at the time of recurrence/relapse. As for RCTs which examined whether the concomitant use of antidepressants is effective during recurrence/relapse, there is one study that compared concomitant therapy of olanzapine and fluvoxamine (50 mg/d) and olanzapine monotherapy. The results showed that the olanzapine plus concomitant therapy group had significantly improved symptoms. However, the number of patients in this study was extremely small with just 12 patients. 
Moreover, rather than an effect of the antidepressant itself, the clinical effect of concomitant olanzapine and fluvoxamine may result from increased blood olanzapine concentrations. In addition, tolerability was not clearly described in this study. Thus, the evidence on the concomitant use of antipsychotics and antidepressants at the time of recurrence/relapse is currently unclear. As a result, concomitant therapy is not recommended at the time of recurrence/relapse.
The disease stages of schizophrenia can be classified into the acute phase, stabilization phase, and stable phase. There are no guidelines or algorithms that strictly define these disease stages, but there is a broad consensus that the acute phase is when symptoms are active and the condition is unstable, the stabilization phase is when symptoms have improved and the condition is stabilizing, and the stable phase is when symptoms have disappeared and the condition is stable. The stabilization phase and stable phase are often grouped together as the maintenance phase. This chapter describes treatment during this maintenance phase. Relapse is the largest factor that inhibits recovery in schizophrenia patients. Observational research on first-episode schizophrenia showed that the relapse rate within five years among first-episode patients was 81.9%. Repeated relapse further exacerbates psychiatric symptoms and decreases social functioning. For this reason, prevention of relapse is one of the most important issues in the maintenance treatment of schizophrenia patients. The topic of each CQ examined in this chapter is shown below, and a summary of this chapter is shown in Table . Please refer to the explanations of each CQ for the specific content. CQ3-1 addresses whether drug administration can be suspended, with the aim of recurrence prevention or recovery, in maintenance-phase patients who have stabilized with acute treatment or who have reached remission. Continuing antipsychotic treatment requires weighing effects against side effects, which differ according to pharmacological profiles such as the in vivo half-life and receptor affinity of each antipsychotic. Drug selection is another critical issue, so the question of which drug is favorable for continuing antipsychotic treatment is addressed in CQ3-2. Decreased adherence to drug administration is frequently a problem when treating patients in the maintenance phase. Long-acting injection (LAI) is a treatment administered by injection at two- to four-week intervals, which does not necessarily require daily oral antipsychotics. CQ3-3 examines whether LAI is more effective than orally administered drugs. Many patients in the maintenance phase wish to decrease their dose of antipsychotics, but continued drug administration has been shown to be necessary for recurrence prevention. In CQ3-4, the available clinical information is summarized and presented on whether decreased doses of antipsychotics are useful in the maintenance phase. Furthermore, continuous administration (ie, continuous dosing) is generally required to maintain stable blood concentrations of antipsychotics, but intermittent administration methods have also been investigated from the perspective of recurrence prevention or reduced side effects. Therefore, CQ3-5 examines the appropriate administration interval for treatment in the maintenance phase.
References
1 Takeuchi H, Suzuki T, Uchida H, et al. Antipsychotic treatment for schizophrenia in the maintenance phase: a systematic review of the guidelines and algorithms. Schizophr Res. 2012;134:219-25. PMID: 22154594.
2 Robinson D, Woerner MG, Alvir JM, et al. Predictors of relapse following response from a first episode of schizophrenia or schizoaffective disorder. Arch Gen Psychiatry. 1999;56:241-7. PMID: 10078501.
3 Lieberman JA. Atypical antipsychotic drugs as a first-line treatment of schizophrenia: a rationale and hypothesis. J Clin Psychiatry. 1996;57(Suppl 11):68-71. PMID: 8941173.
4 Kane JM, Kishimoto T, Correll CU. Non-adherence to medication in patients with psychotic disorders: epidemiology, contributing factors and management strategies. World Psychiatry. 2013;12:216-26. PMID: 24096780.
Recommendation
Continuous administration of antipsychotics in patients in the maintenance phase decreases recurrence rates A and the number of hospitalizations A . Furthermore, continuous administration of antipsychotics decreases the mortality rate C and prevents decreases in the quality of life (QOL) C . Therefore, continuous administration of antipsychotics is recommended in the maintenance phase 1A .
Explanation
Whether antipsychotic treatment can be suspended in the maintenance phase, when the active symptoms of schizophrenia are stable, is an important question not only for patients but also for psychiatrists. A meta-analysis based on a total of 65 RCTs of patients in the maintenance phase, which compared continuous antipsychotic administration with placebo, was reported in 2012. According to this meta-analysis, the continuous administration of antipsychotics reduced the relapse rate (27% vs 64%, risk ratio (RR) of 0.4) between 7 and 12 months after the start of the study and the rehospitalization rate (10% vs 26%, RR of 0.38). Furthermore, there were no significant differences between continuous antipsychotic administration and placebo for discontinuation from the study due to side effects or for the outcome of at least one reported side effect. This meta-analysis by Leucht et al. showed no significant differences in mortality rate between continuous administration of antipsychotics and placebo. A report by Khan et al., which compiled new drug approval information from the U.S. Food and Drug Administration (FDA), showed that the mortality rate of patients assigned to antipsychotic groups was significantly lower than that of patients in placebo groups. Furthermore, a long-term, large-scale cohort follow-up study in Finland showed that long-term antipsychotic treatment decreased the mortality rate compared to no antipsychotic treatment (hazard ratio of 0.81). Only some antipsychotics have been studied with regard to QOL, and the evidence is limited. However, reports have indicated that continuous antipsychotic administration is useful for the improvement and maintenance of patient QOL. Given these studies, the discontinuation of antipsychotics is not recommended in any of the eight international guidelines and algorithms published since 2000 that mention the possibility of suspending antipsychotic administration. This guideline also recommends the continuous administration of antipsychotics.
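As a back-of-envelope check (an illustrative calculation, not a figure taken from the cited meta-analysis itself), a risk ratio is simply the event rate under treatment divided by the event rate under control, so the quoted relapse figures correspond to

\[ \mathrm{RR} = \frac{\text{relapse rate, continuous antipsychotics}}{\text{relapse rate, placebo}} = \frac{0.27}{0.64} \approx 0.42, \]

consistent with the reported RR of about 0.4 (and likewise 0.10/0.26 ≈ 0.38 for rehospitalization). Values below 1 favor continuous administration; the small discrepancy is attributable to rounding and meta-analytic weighting.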
Recommendation
SGAs are superior to FGAs in terms of relapse prevention B but show no clear differences from FGAs in terms of treatment discontinuation for all reasons B . Therefore, it is desirable to select SGAs over FGAs 2B . There is insufficient evidence regarding comparisons between SGAs. No recommendations are made for drug selection, since the factors of each case need to be considered.
Explanation
Kishimoto et al. reported a meta-analysis that compared the relapse prevention effects of FGAs and SGAs. The inclusion criteria for this meta-analysis were patients followed up for over six months in RCTs of FGAs and SGAs (average duration 61.9 ± 22.4 weeks). The primary outcome was relapse, and the secondary outcomes included relapse at 3/6/12 months, hospitalization, and treatment failure (discontinuation due to all reasons and relapse). Twenty-three trials (total n = 4504) were analyzed. The number of trials for each antipsychotic was as follows: for SGAs, amisulpride*, 3; aripiprazole, 2; clozapine, 4; iloperidone*, 3; olanzapine, 6; quetiapine, 1; risperidone, 6; sertindole*, 1; and ziprasidone*, 1; for FGAs, 21 of the 23 trials used haloperidol. The analysis showed that, although the difference was small in absolute terms (number needed to treat [NNT] = 17), SGAs overall had a significantly lower relapse rate than FGAs (29.0% vs 37.5%, RR of 0.80; P = 0.0007). Secondary outcomes also showed that SGAs were significantly superior to FGAs in terms of relapse at 3/6/12 months, treatment failure, and rehospitalization. There were no significant differences between the two groups for discontinuation due to all reasons, discontinuation due to side effects, or adherence. However, SGAs tended to show more favorable values for discontinuation due to all reasons and discontinuation due to side effects. There are only a few RCTs that directly compared individual SGAs, and there is little evidence on which drug is superior. A study which randomly assigned 133 obese patients who had received olanzapine and were in remission to olanzapine and quetiapine groups and observed them for 24 weeks showed no significant differences between the two groups in the time to relapse, but olanzapine had a superior treatment continuation rate (70.6% vs 43.1%, P = 0.002). Meanwhile, olanzapine was inferior to quetiapine in terms of weight gain. A study which randomly allocated 86 schizophrenia patients who had been treated with FGAs to olanzapine and quetiapine groups and investigated improvements in cognitive function and QOL (observation duration of one year) showed that quetiapine was superior to olanzapine in tolerability and subjective cognitive function, but olanzapine showed better symptom stability and a higher treatment continuation rate than quetiapine. In summary, the relative superiority of the drugs was inconsistent depending on the outcome, even when comparing specific combinations of antipsychotics, and there is also insufficient information on other drugs. The prevention and management of side effects such as EPS (including tardive dyskinesia), hyperprolactinemia, body weight gain, hyperglycemia, metabolic/cardiac disease, and metabolic syndrome are needed, since long-term antipsychotic treatment is required for maintenance treatment. Therefore, it is desirable to select the optimal SGA for individual patients while considering the side effects of antipsychotic treatment in the maintenance phase.
However, as mentioned above, there is insufficient evidence on the superiority of individual SGAs, and the factors of each case need to be considered. In conclusion, no recommendation for a specific drug is provided.
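As an illustrative note (the formula is standard, but the decomposition below is an assumption rather than a figure taken from the cited meta-analysis), the number needed to treat is the reciprocal of the absolute risk reduction:

\[ \mathrm{NNT} = \frac{1}{\mathrm{ARR}}, \qquad \mathrm{ARR} = \text{relapse risk}_{\mathrm{FGA}} - \text{relapse risk}_{\mathrm{SGA}}. \]

An NNT of 17 therefore corresponds to a pooled absolute risk reduction of roughly 6 percentage points, ie, about one additional relapse prevented for every 17 patients treated with an SGA rather than an FGA. Because meta-analyses pool the risk difference with study-level weighting, this need not equal the reciprocal of the simple difference between the raw pooled rates (37.5% minus 29.0%).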
Recommendation
Studies in which many patients had good adherence showed no significant differences in relapse prevention effects, treatment continuation rates, or side effects between LAI and oral drugs A . Meanwhile, clinical data that may include patients with poor adherence showed that LAI had an extremely strong hospitalization prevention effect compared to oral drugs C . Therefore, LAI is desirable in patients in whom relapse is a problem due to improper intake of the prescribed drug 2C . Furthermore, LAI is recommended in patients who request it 1C .
Explanation
Many RCTs have been reported on the relapse prevention effects of oral antipsychotics and LAI. A report by Kishimoto et al., based on 21 RCTs (total n = 5176) that followed patients in the maintenance phase for over 24 weeks, showed no significant differences in relapse prevention effects between LAI and oral drugs. This lack of superiority of LAI over oral drugs was seen in all secondary outcomes related to relapse, specifically the relapse rate at 3/6/12/18/24 months, discontinuation from the trial due to all reasons, discontinuation due to side effects, and hospitalization. Furthermore, the effects of LAI and oral medication were similar even when specific trial designs or patient subgroups were extracted. However, as discussed in that report, sufficient attention must be given to whether the RCTs had an appropriate trial design for comparing the relapse prevention rates of LAI and oral drugs. Selection bias (subjects participating in RCTs were properly taking their drugs and were cooperative with treatment and examination) may have reduced the apparent effects of LAI, since patients in RCTs differ from the patient groups that use LAI in daily clinical practice. The fact that participation in a trial itself produces conditions considerably different from normal clinical settings must also be considered. Various factors such as reminders for the next consultation, rewards for participation in the trial, and evaluations of administration status may encourage drug administration and make it more difficult to detect differences between the effects of LAI and oral drugs. Taking into account the previously mentioned limitations of RCTs, Kishimoto et al. conducted a meta-analysis of mirror-image trials as data that more closely reflect the effects of LAI in clinical settings. Mirror-image trials compare outcomes over equal periods before and after the introduction of a given treatment; each individual patient serves as their own control, with the point of treatment introduction as the boundary. A total of 25 mirror-image studies (total n = 5940) were included in this analysis. Some of these trials had a follow-up duration of over six months each for the LAI and oral-drug periods. The analysis showed that LAI was highly superior to oral drugs in preventing hospitalizations and decreasing the number of hospitalizations. However, caution is required when interpreting mirror-image research results because of expectation bias (symptoms are more likely to improve because of the expectation that a new treatment will be received; in particular, all trials included in the analysis were switches from oral drugs to LAI), the natural course of the illness, and the effects of time (susceptibility to policy effects such as deinstitutionalization).
Mirror-image studies should be considered as a collection of cohort studies of specific populations (or as follow-up data of patients who switched from oral drugs to LAI) and case series, and the strength of evidence was therefore set as C . Some reports found that injection-site side effects and EPS were more frequent in the LAI group, but many other reports showed no distinct differences compared to oral drugs. RCT-based meta-analyses also showed no significant differences from oral drugs in terms of "discontinuation from trials due to side effects". Paliperidone palmitate became commercially available in Japan in November 2013, and post-marketing surveys of the drug confirmed 32 deaths among approximately 11 000 users (~0.29%) from April to June 2014, which was reported by various media sources. However, post-marketing surveys are based on spontaneous, unregistered reports, and it must be considered that the sensitivity of such data increases as more attention is given to the subject. In fact, the results of Phase I-III trials (Japanese and international), in which the actual number of users was registered, showed no clear differences compared to other drugs. Thus, there is no established evidence at present indicating that the risk of death with this drug is particularly high compared to other drugs. However, it should be kept in mind that post-marketing surveys aim to detect rare side effects that may not be easy to detect at the clinical trial stage. Therefore, during use, the dosage and administration guidelines should be followed, and excessive doses and polypharmacy should be avoided. Based on the above evidence, the recommendation of this guideline is that it is desirable to use LAI, based on patient consent through shared decision-making (SDM), in cases of repeated relapse due to inadequate drug administration 2C . Additionally, LAI is recommended for patients who request LAI (for example, because it frees them from daily drug administration), given the possibility that LAI may be more effective than oral drugs in terms of relapse prevention 1C .
Recommendation
Studies on reducing doses of antipsychotics in the maintenance phase had variable research designs, and there are no consistent results on outcomes such as relapse, treatment continuation, exacerbation of psychiatric symptoms, and improvement in side effects D . Therefore, it is not possible to conclude at this time whether reducing doses of antipsychotics is useful in the maintenance phase. The advantages and disadvantages of reducing doses need to be decided clinically according to the symptoms and side effects of individual patients (no recommendation D ).
Explanation
Evidence on reducing antipsychotic doses in schizophrenia patients maintained on normal doses of antipsychotics is described separately for FGAs and SGAs.
FGAs
The studies described in this section are double-blind RCTs. Kane et al. studied 126 patients undergoing treatment with fluphenazine LAI (12.5-50 mg/2 weeks) and compared a group whose doses were reduced to 1/10 with a continuation group. The results over one year showed that the relapse rate (56% vs 7%) was significantly higher in the reduced-dose group, while no significant differences in side effects (tardive dyskinesia) were observed. Johnson et al. studied 59 stable patients undergoing treatment with flupenthixol LAI (<40 mg/2 weeks) and compared a group whose doses were reduced to half with a continuation group. The results over one year showed that the relapse rate (32% vs 10%) was significantly higher in the reduced-dose group. No significant differences were seen in side effects (EPS). Hogarty et al. studied 70 stable patients undergoing treatment with fluphenazine LAI (average of 21.5 mg/2 weeks) and compared a group whose doses were reduced to 1/5 (average of 3.8 mg/2 weeks) with a continuation group. The results over two years revealed no significant differences in relapse rate (30% vs 24%) or treatment discontinuation rate. Faraone et al. studied 29 patients undergoing treatment with various FGAs and compared a group whose doses were reduced to 1/5 with a continuation group. The results over six months showed a significantly higher tendency toward relapse (36% vs 0%) in the reduced-dose group. Inderbitzin et al. studied 43 patients undergoing treatment with fluphenazine LAI (average of 23 mg/2 weeks) and compared a group whose doses were reduced to half with a continuation group. The results over one year showed no significant differences in relapse rate (25% vs 24%), treatment continuation rate, or psychiatric symptoms. However, EPS improved significantly in the reduced-dose group compared to the continuation group. Schooler et al. studied 213 stable patients undergoing treatment with fluphenazine LAI (12.5-25 mg/2 weeks) and compared a group whose doses were reduced to 1/5 with a continuation group. The results over two years showed no significant differences in rehospitalization rate (25% vs 25%). In summary, the majority of reports on reduced doses of FGAs involved LAI, the dose reductions varied from half to 1/10, and the results regarding relapse and improvements in side effects were inconsistent (most reports did not clearly describe treatment continuation or psychiatric symptoms).
SGAs
No double-blind RCTs of dose reduction of SGAs in patients in the maintenance phase have been conducted to date; therefore, only open-label RCT results are described below. Rouillon et al. studied 97 stable patients undergoing treatment with olanzapine and compared a reduced-dose group (average of 17.6 reduced to 13.3 mg/d) with a continuation group (average of 18.1 mg/d). The results over six months showed no significant differences in relapse rate (8% vs 6%), treatment continuation rate, psychiatric symptoms, or side effects (EPS and weight gain). Wang et al. compared a group whose dose reduction to half of the initial risperidone dose (average of 4.4 reduced to 2.2 mg/d) started four weeks after stabilization, a group whose dose reduction to half (average of 4.2 reduced to 2.1 mg/d) started at 26 weeks, and a continuation group (average of 4.3 mg/d). The results over one year showed that both reduced-dose groups had significantly higher recurrence rates than the continuation group (24%, 16%, and 8%, respectively). There were also significant differences in psychiatric symptoms among the three groups, but no significant differences in treatment continuation rate or side effects (EPS and weight gain). Takeuchi et al. studied 61 stable patients undergoing treatment with either risperidone or olanzapine and compared a group whose doses were reduced to half (risperidone: average of 3.7 reduced to 2.1 mg/d; olanzapine: average of 13.8 reduced to 7.1 mg/d) with a continuation group (risperidone: average of 4.5 mg/d; olanzapine: average of 14.1 mg/d). The results over six months showed no significant differences in recurrence rate (3% vs 3%) or treatment continuation rate, but side effects (EPS and cognitive dysfunction) improved significantly in the reduced-dose group compared to the continuation group. Overall, with just three open-label RCTs, there is insufficient evidence on reduced doses of SGAs, and the results on recurrence, exacerbation of psychiatric symptoms, and side effects are inconsistent (there were no significant differences between the reduced-dose and continuation groups in treatment continuation rate). Based on the current evidence, guideline/algorithm recommendations on whether the antipsychotic dose required for acute treatment should be continued during maintenance treatment vary by country, and no unified consensus has been reached. Accordingly, no conclusion on whether reduced doses of antipsychotics are useful for patients in the maintenance phase can be made in this guideline either.
Recommendation
Continuous maintenance antipsychotic therapy, in which the drug is administered regularly every day, significantly reduces recurrence and rehospitalization and significantly increases treatment continuation compared to intermittent dosing, in which drug administration is suspended and restarted when recurrence is suspected A . There is insufficient evidence for the extended-dosing method, in which drugs continue to be administered regularly but at dosing intervals longer than usual. Therefore, continuous-dosing methods that involve regular administration every day are recommended 1A .
Explanation
Intermittent dosing, instead of continuous maintenance antipsychotic therapy, has been attempted with the objective of reducing side effects. Here, we describe the appropriate dosing intervals of antipsychotics for patients in the maintenance phase in whom the active symptoms of the acute phase have become stable. A meta-analysis of intermittent dosing of antipsychotics (N = 17, n = 2252) reported in 2013 examined whether intermittent dosing was more useful than continuous dosing with daily regular administration in terms of outcomes such as recurrence and rehospitalization. This meta-analysis showed that ① intermittent dosing of various types had significantly higher short-term (<12 weeks), medium-term (13-25 weeks), and long-term (over 26 weeks) relapse risks (N = 4, 5, 7; RR = 1.68, 2.41, 2.46, respectively) compared with continuous dosing. The long-term rehospitalization risk was significantly higher (N = 5, RR = 1.65) and long-term treatment continuation was significantly lower (N = 10, RR = 1.63). The meta-analysis also classified ① the various types of intermittent dosing into subtypes, namely ② suspending continuous administration and restarting when recurrence is suspected (early-based), ③ suspending continuous administration and restarting when recurrence has clearly occurred (crisis intervention), ④ methods which increase the drug administration intervals (ie, gradually increased drug-free periods), and ⑤ methods which assign drug holidays at fixed intervals (several days a week or for several weeks continuously), and compared each subtype with continuous dosing. No advantage of intermittent dosing over continuous dosing was found even within these subtypes; intermittent dosing had a higher risk of recurrence and rehospitalization in many comparisons. Table shows an excerpt of the results of this meta-analysis. As for side effects, some trials showed that the EPS score was lower with intermittent dosing than with continuous dosing, but the previously mentioned meta-analysis found no significant difference from continuous dosing for tardive dyskinesia (N = 3). Based on these data, none of the nine international guidelines and algorithms published since 2000 that mention the possibility of suspending antipsychotic administration recommend intermittent dosing. However, although these methods are broadly referred to as intermittent dosing, there are large differences between methods that suspend drug administration (restarting when recurrence is suspected, restarting when recurrence is clear, or extending non-administered periods) and those that extend the dosing interval while continuing regular administration (drug holidays or the extended-dosing method discussed later).
For example, one RCT that compared the extended-dosing method (drugs initially taken every day were taken once every two days) with the continuous-dosing method showed no significant differences in recurrence or rehospitalization risk. Overall, there is insufficient evidence for the effectiveness of intermittent dosing. In conclusion, continuous maintenance antipsychotic therapy, in which drugs are regularly administered every day, is recommended for patients in the maintenance phase.
Many patients with schizophrenia, whether in a first episode or in relapse, do not respond even to sufficient treatment with antipsychotics. This chapter describes treatment-resistant schizophrenia (Figure ). A broad definition of treatment-resistant schizophrenia includes patients who show no improvement even with antipsychotics from different chemical classes given at sufficient doses for an adequate trial duration. There are various definitions of "antipsychotics from different chemical classes," "sufficient doses," "adequate trial duration," and "no response," but in Japan, "treatment resistance" is defined as a patient having "never reached 41 points or more on the Global Assessment of Functioning (GAF)" despite "at least two antipsychotics" administered at a dose of "over 600 mg/d in chlorpromazine equivalents*" for "over four weeks" (Table ). This guideline also adopts this definition of treatment-resistant schizophrenia for application in clinical practice in Japan. Please refer to CQ5-6 (⇒ pg. 117) for treatment-resistant schizophrenia due to poor tolerability (when doses cannot be sufficiently increased because of EPS; Table ). The significance of each CQ examined in this chapter is described below, and a summary of this chapter is provided in Table . Please refer to each CQ and its explanation for specific content. Clozapine is the only drug that has been shown worldwide to be effective for patients with treatment-resistant schizophrenia. There is a large amount of high-quality evidence demonstrating that clozapine is more effective than other treatments, and the current guidelines in each country thus recommend clozapine for treatment-resistant schizophrenia. This chapter first addresses clozapine treatment, describing its usefulness (CQ4-1), side effects (CQ4-2), and concomitant therapy (CQ4-3). A serious side effect of clozapine is agranulocytosis, and a monitoring system for the development of agranulocytosis among clozapine-treated patients is needed. Clozapine was introduced in Japan in 2009, but only a limited number of facilities have been approved to use this drug, and its introduction in Japan is extremely delayed relative to other countries. Of the 700 000-800 000 schizophrenia patients in Japan, 20%-30% are estimated to be treatment resistant, so the predicted number of patients with treatment-resistant schizophrenia in Japan is ~150 000-250 000. However, only ~3400 patients in Japan are receiving clozapine treatment; that is, only 1%-2% of patients with treatment-resistant schizophrenia are receiving clozapine. The introduction of clozapine as a general medical treatment for treatment-resistant schizophrenia is therefore an urgent issue. There is little evidence for the effectiveness of other treatment methods, and only clozapine treatment is recommended for treatment-resistant schizophrenia; the other treatment methods are discussed in subsequent CQs. Prior to the introduction of clozapine in Japan, modified electroconvulsive therapy (m-ECT) was often used for treatment-resistant schizophrenia; m-ECT is discussed in CQ4-4, and all other treatment methods are discussed in CQ4-5. General information about m-ECT methods, risk assessment, and contraindications is not included in this guideline due to space constraints. Please refer to the Pulse Wave ECT Handbook and the literature recommended by the Japanese Society of Psychiatry and Neurology for details.
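As a purely illustrative calculation of the 600 mg/d criterion, the following assumes conversion factors from commonly used Japanese chlorpromazine-equivalence tables (eg, risperidone 1 mg ≈ chlorpromazine 100 mg, olanzapine 2.5 mg ≈ chlorpromazine 100 mg), which are not specified in this excerpt:

\[ 6\ \text{mg/d risperidone} \times \frac{100\ \text{mg chlorpromazine}}{1\ \text{mg risperidone}} = 600\ \text{mg/d chlorpromazine equivalent}. \]

Under these assumed factors, a trial of, for example, risperidone 6 mg/d or olanzapine 15 mg/d for over four weeks would meet the dose criterion, whereas a lower dose would not, regardless of its duration.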
In clinical settings, there are many schizophrenia patients with "pseudo-resistance," who have not received adequate antipsychotic treatment but otherwise appear to meet the definition of treatment-resistant schizophrenia; these patients are equivalent to those with treatment-resistant schizophrenia in terms of symptoms and social functioning. There is no substantial evidence on treatment methods for this patient population; only reviews and case reports are available. This guideline, therefore, does not address the clinically important and urgent question of "what are useful treatment methods for pseudo-resistance?" since no adequate evidence from controlled studies is available. This is a field that requires further research, but it should be emphasized that patients whose treatment has been unsuccessful should receive antipsychotic treatment in accordance with the above-noted definition of treatment-resistant schizophrenia and then begin treatment with clozapine, with consideration of the items in Table .
References
1 Novartis Pharma K.K. Clozaril package insert. 2013.
2 American Psychiatric Association. Practice Guideline for the Treatment of Patients With Schizophrenia, 2nd edn. American Psychiatric Association; 2004.
3 Psychosis and schizophrenia in adults: treatment and management. NICE clinical guideline 178, 2014.
4 Taylor D, Paton C, Kapur S, editors. The Maudsley Prescribing Guidelines in Psychiatry, 11th edn. UK: Wiley-Blackwell; 2012.
5 Mankad MV, Beyer JL, Weiner RD, et al. Clinical Manual of Electroconvulsive Therapy. American Psychiatric Publishing, Washington DC, 2010 (Motohashi N, Ueda S (trans.): Pulse Wave ECT Handbook. Igaku Shoin, Tokyo, 2012).
6 Motohashi N, Awata S, Isse K, et al. Recommendations for ECT Practice, 2nd edn. Psychiatria et Neurologia Japonica. 2013;115:580-600.
7 Hashimoto R, Yamamori H, Yasuda Y, et al. Usefulness of clozapine for treatment-resistant schizophrenia in schizophrenia hospitalization programs. Jpn J Clin Psychopharmacol. 2012;15:1841-55.
8 Yamaji K, Hashimoto R, Ohi K, et al. Case where blonanserin was effective due to a schizophrenia hospitalization program. Jpn J Clin Psychopharmacol. 2012;15:1213-9.
9 Hashimoto R, Yasuda Y, Yamamori H, et al. Treatment strategies and pathology research for treatment-resistant schizophrenia — true treatment-resistant schizophrenia and apparent treatment-resistant schizophrenia. Jpn J Clin Psychopharmacol. 2014;17:1595-604.
Recommendation
Clozapine has not been shown to be superior to other second-generation antipsychotics (SGAs) for the improvement of psychiatric symptoms, but it has been shown to be superior to first-generation antipsychotics (FGAs) B . The risk of death associated with clozapine treatment is low, and its suicide-prevention effect is particularly high B . In addition, the treatment continuity of clozapine is higher than that of other drugs A . In terms of side effects, the incidence of EPS is low, but caution is required with regard to side effects such as agranulocytosis A . In conclusion, clozapine treatment for treatment-resistant schizophrenia requires attention to side effects such as agranulocytosis, but this SGA is useful and recommended 1A .
Explanation
Blinded randomized controlled trials (RCTs) have been conducted to examine the usefulness of clozapine for treatment-resistant schizophrenia. However, the cohorts in blinded RCTs consist of patients who can consent to participate in a trial and whose severity allows them to participate, and such patients do not reflect the actual clinical settings for treatment-resistant schizophrenia. The evidence considered in this CQ therefore includes results from large-scale cohort studies, which are thought to reflect actual clinical settings more accurately. The results of many blinded RCTs have indicated that clozapine is superior to FGAs in terms of improvement in psychiatric symptoms. Blinded RCTs comparing clozapine with other SGAs such as risperidone and olanzapine have also been conducted, but the results have not been consistent. However, in a large-scale cohort study, clozapine was observed to be superior to risperidone and quetiapine for improving psychiatric symptoms. Another large-scale cohort study revealed that clozapine treatment had the lowest risk of death compared to other antipsychotics. Additional cohort studies showed that the risk of suicide is particularly decreased with clozapine treatment, and a blinded RCT demonstrated that clozapine was superior to olanzapine in preventing suicidal behavior in patients with schizophrenia at high risk for suicide. The results of one blinded RCT indicated that the rate of continuation of treatment with clozapine was higher than that with haloperidol over a one-year trial period, but other trials reported no significant differences in continuation rates between clozapine and other antipsychotics. Large-scale cohort studies indicated that clozapine had a low discontinuation risk, a high treatment continuation rate, and a low relapse/rehospitalization risk. With regard to side effects, the risk of EPS associated with clozapine was shown to be low compared with FGAs and other SGAs, but the risk of agranulocytosis was high, and the overall risk of side effects was reported to be high. Appropriate monitoring and early intervention are thus needed for clozapine side effects (see CQ4-2 for details ⇒ pg. 76). In conclusion, blinded RCTs did not establish that clozapine was superior to other SGAs in terms of improving psychiatric symptoms, but there was sufficient evidence of its superiority over FGAs. Clozapine was also shown to have a high suicide-prevention effect.
Clozapine is recommended for the treatment of individuals with treatment‐resistant schizophrenia given that multiple large‐scale cohort studies have demonstrated its effectiveness, though caution is required because of side effects such as agranulocytosis.
Recommendation/explanation
Because it acts on various types of receptors, clozapine can cause a wide range of side effects, including agranulocytosis, leukocytopenia, myocarditis/myocardiopathy, seizure, constipation/ileus, weight gain, impaired glucose tolerance, and hypersalivation. Agranulocytosis and myocarditis can occur at any point during clozapine treatment. In particular, agranulocytosis often occurs within the first 18 weeks of treatment with clozapine, and myocarditis often occurs within the first three weeks. , As with other drugs, it is recommended that the clozapine dose be decreased when one or more clozapine-related side effects occur and that the treatment be temporarily suspended when the side effect(s) are serious 1D . However, there are cases in which clozapine treatment at a given dose should be continued despite side effects if the clozapine is improving the patient's psychiatric symptoms. This CQ describes how to manage such situations. There are very few RCTs suggesting the efficacy of clozapine in combination with pharmacological therapies that address its side effects. There is a Cochrane Review for hypersalivation, but there are no high-quality RCTs, and in Japan there are few drugs that can be used for this purpose. Most reports on combining clozapine with another pharmacological therapy for clozapine-associated side effects are either case reports or reviews of accumulated case reports and observational studies. For this reason, our investigation in this CQ focuses on case reports and observational studies. It should always be kept in mind that additional side effects may occur as a result of the pharmacological therapy used to address the side effects of clozapine.

Hematological side effects
Benign neutropenia can occur when a larger proportion of neutrophils adheres to the vascular endothelium rather than circulating freely in the blood vessels, and it may be seen in an early-morning blood collection. For this reason, same-day retesting is recommended if the result of a blood test indicates leukocytopenia (neutropenia) 1D . Mild exercise such as walking can be effective for benign neutropenia 2D . Lithium is suggested as a pharmacological therapy for leukocytopenia (neutropenia) , , , , , , , , , , 2C . However, agranulocytosis cannot be prevented even with the concomitant use of lithium. , When agranulocytosis occurs, clozapine should be suspended according to the package insert, and a consultation with a hematologist is recommended.

Myocarditis/myocardiopathy
The early detection of myocarditis is crucial. Cold-like symptoms (chills, fever, headache, myalgia, and general malaise) and gastrointestinal symptoms such as loss of appetite, nausea, vomiting, and diarrhea appear first; cardiac symptoms appear several hours to several days later. The cardiac symptoms include persistent tachycardia at rest, palpitations, arrhythmia, and signs or symptoms of chest pain or heart failure (eg, unexplained fatigue, dyspnea, or tachypnea). A consultation with a cardiologist is recommended when these types of cardiac symptoms are observed 1D . Electrocardiography (ECG) usually reveals some abnormal findings during the course of illness. Myocardial constituent proteins (myocardial troponin T or creatine kinase myocardial band [CK-MB]) can be detected in the serum. Increases in C-reactive protein (CRP) levels and the white blood cell count are also observed.
The early detection of troponin T using whole blood is particularly useful. In conclusion, it is recommended that an ECG be obtained and troponin T and CRP be measured prior to the initiation of clozapine treatment, and that these examinations be repeated every week for the first four weeks after the start of clozapine administration 2C .

Seizure
When seizures occur, the possibility that they were caused by factors other than clozapine treatment should be considered; such factors include alcohol withdrawal, benzodiazepine drug withdrawal symptoms, and electrolyte abnormalities from water intoxication 1D . For clozapine-induced seizures, it is recommended that anticonvulsants be selected according to the seizure type 1D . Valproic acid , , , is often used as a first-choice treatment, but caution is required because it can increase the risk of myocarditis during the early stages of clozapine administration. Lamotrigine, , topiramate, , and gabapentin , may also be selected, but the guideline suggests that the use of carbamazepine, phenytoin, and phenobarbital be avoided 2D .

Constipation
There is no specific treatment or management for clozapine-induced constipation, and caution is required since it can develop into an ileus. Simply asking a patient about bowel movements may be insufficient. Palpation and auscultation of the abdomen should be conducted, supplemented with X-ray imaging as needed, and regular confirmation of the patient's defecation status in this manner is recommended 1D . Laxatives such as magnesium oxide and stimulant laxatives such as senna are the first-choice treatment for clozapine-induced constipation. Clozapine-induced constipation carries a high risk of worsening into an ileus with a potentially life-threatening outcome. Consultation with a gastroenterologist is recommended if there is moderate or greater abdominal pain, distension, or vomiting 1D .

Weight gain/impaired glucose tolerance
Dietary guidance (eg, carbohydrate restriction ) and exercise guidance are recommended for treating a patient's weight gain and impaired glucose tolerance 1D . Metformin may be useful as a drug to be used concomitantly with clozapine, , but metformin has not been demonstrated to significantly reduce the risk of diabetes. There is a report that the concomitant use of aripiprazole with clozapine results in significant weight loss, but this is not recommended in Japan, where clozapine monotherapy is the general rule. Consultation with a diabetes specialist is recommended if diabetes is strongly suspected 1D .

Other side effects
Among the other side effects of clozapine, hypersalivation occurs most frequently. Hypersalivation often tends to improve gradually even with continued use of clozapine, and the guideline thus suggests that follow-up observation be conducted before taking further steps 2D . Hypersalivation is frequently a problem at night; it can be addressed by laying a towel on the patient's pillow. There have been reports on biperiden , and butylscopolammonium bromide , as pharmacological therapies that have produced some degree of improvement in hypersalivation, but attention should be paid to the side effects of anticholinergic drugs. A review by Raja covers the overall side effects of clozapine, and useful references include the Maudsley Prescribing Guidelines in Psychiatry and 100 Q&As for Clozapine. Please refer to the guidelines of the respective societies , for details on pharmacological therapies for diabetes and epilepsy.
Safety information about Clozaril®, available on the Novartis Pharma website for healthcare professionals, is a reference for post-marketing side effects in Japan.
Recommendation
Clozapine with the concomitant use of ECT is useful, although its effects may be transient 2C . Clozapine with the concomitant use of lamotrigine is potentially useful 2D . Clozapine with the concomitant use of other mood stabilizers, antiepileptic drugs, antidepressants, or BZ drugs has not been shown to be useful, and it is suggested that these concomitant therapies be avoided when the aim is to improve psychiatric symptoms 2D . The concomitant use of clozapine with valproic acid at the early stages of clozapine introduction is not recommended, due to the possible increase in myocarditis risk 1C . Weak augmentation effects can be expected from clozapine with the concomitant use of antipsychotics, but clozapine monotherapy is stipulated in Japan as a general rule, and thus no recommendation is made (no recommendation C ).

Explanation
This CQ describes concomitant therapy when the effects of clozapine are insufficient for treatment-resistant schizophrenia (so-called augmentation therapy). Recommendations are discussed by dividing the concomitant therapy options into six categories: electroconvulsive therapy (ECT), mood stabilizers/antiepileptic drugs, antidepressants, antipsychotics, BZ drugs, and other drugs. However, there are insufficient data from RCTs, and further controlled clinical studies are needed.

Clozapine with the concomitant use of ECT
One RCT (n = 39) and two comparative studies , have reported on the efficacy and safety of clozapine with the concomitant use of ECT. Each of these studies had a small sample size, and there are no reliable reports with a large sample size. However, the combined use of clozapine and ECT may be effective in patients who show a partial response to clozapine. There are no clinical studies showing sustained effects after the end of treatment with ECT, and it should be kept in mind that the effects of concomitant ECT may be transient.

Clozapine with the concomitant use of mood stabilizers or antiepileptic drugs
A meta-analysis summarized five RCTs (total n = 161) on the concomitant use of clozapine and lamotrigine. There were no problems with tolerability or safety, and significant improvements were observed compared to placebo. However, the evidence for augmentation with lamotrigine is insufficient when all of the existing reports are considered together. In addition, the effects of clozapine on the glucuronidation of lamotrigine are not clear, and the package insert of lamotrigine states that the dose and administration specified for combination with sodium valproate should be followed to avoid serious side effects. There have been four RCTs on the concomitant use of topiramate, , but overall the results did not show significant improvements compared to placebo. Another RCT suggested high discontinuation rates, indicating that topiramate is not useful. Clozapine with the concomitant use of lithium carbonate has not been shown to improve psychiatric symptoms and has low tolerability. The guideline suggests that lithium as a concomitant therapy be avoided when the objective is restricted to improving psychiatric symptoms. Clozapine with the concomitant use of carbamazepine or sodium valproate is not recommended, because (1) it may cause fluctuations in blood clozapine concentrations, and (2) there are no consensus reports showing that it improves psychiatric symptoms.
The concomitant use of sodium valproate in the early stages of clozapine administration is not recommended unless there is a particular reason for doing so, since it may also increase the rate of myocarditis.

Clozapine with the concomitant use of antidepressants
There are small-scale RCTs on the concomitant use of clozapine with duloxetine, mirtazapine, and fluvoxamine. , Of these, one RCT (n = 40) that investigated concomitant use with duloxetine showed improvements in clinical symptoms and high tolerability, but this was not at a scale at which such a regimen could be recommended.

Clozapine with the concomitant use of BZ drugs
BZ drugs are frequently used concomitantly with clozapine in clinical settings. However, the guideline suggests avoiding the use of BZ drugs together with clozapine, because there are no consensus reports confirming improvements in psychiatric symptoms and these drugs may also cause adverse effects.

Clozapine with the concomitant use of other drugs
One RCT (n = 42) showed that clozapine with the concomitant use of ginkgo biloba improved negative symptoms, but this was not at a scale at which this treatment could be recommended.

Clozapine with the concomitant use of other antipsychotics
There is a relatively large number of clinical trials of clozapine combined with other antipsychotics compared to the other concomitant therapies, and overall there is sufficient evidence on the topic. A meta-analysis that summarized 14 RCTs (total n = 734) revealed significant improvements in psychiatric symptoms, but the effects were weak and there was a possibility that symptoms could worsen. Therefore, the concomitant use of clozapine with other antipsychotics is not very useful. Moreover, clozapine is stipulated to be used solely as monotherapy in Japan as a general rule, with the exception of cross-titration within four weeks of introduction, and thus clozapine with the concomitant use of other antipsychotics cannot be recommended at this time. Additional clinical trials in Japan should be conducted, and more evidence must be accumulated.
Recommendation
In combination with antipsychotics, m-ECT for treatment-resistant schizophrenia may be effective for improving psychiatric symptoms C or reducing the relapse rate D . The tolerability of m-ECT in treatment-resistant schizophrenia is comparable to that in schizophrenia without treatment resistance C , including with respect to cognitive impairment D . Therefore, although there is insufficient evidence about m-ECT for treatment-resistant schizophrenia, the guideline suggests the concomitant use of antipsychotics with m-ECT when clozapine is not used, because it may provide some benefit 2C .

Explanation
ECT, which was introduced by Cerletti and Bini in 1937, is intended to treat psychiatric symptoms by inducing seizures through electrical stimulation of the head. Early ECT involved sine-wave stimulation without anesthesia, but this has changed to m-ECT using brief pulse waves together with intravenous anesthetics and muscle relaxants.

ECT for schizophrenia
The usefulness of ECT (including m-ECT) for schizophrenia has been evaluated in many studies. A meta-analysis and systematic reviews that integrated many controlled clinical trials demonstrated that ECT was superior to sham ECT in the short term (<6 months) with regard to efficacy, relapse prevention, and the promotion of hospital discharge. However, a certain degree of caution is required given that there is insufficient evidence regarding these effects for medium- and long-term treatment durations. It is also quite possible that combining ECT with antipsychotics provides effects that exceed those of antipsychotic monotherapy. , Side effects of ECT include prolonged seizures, post-seizure delirium, headache, myalgia, and vomiting; symptomatic treatment often alleviates these side effects. , The mortality rate of ECT is low and is thought to be due primarily to cardiovascular complications; it corresponds almost entirely to the mortality rate of general anesthesia itself and is likely to pose the same level of risk as pharmacological therapy. , , Concomitant therapy using antipsychotics and ECT has been suggested to be more likely to cause short-term memory impairment compared to antipsychotic monotherapy, but no other increases in side effects beyond those mentioned above are known. Based on these results, ECT for schizophrenia is thought to be a useful form of treatment in the short term when restricted to concomitant use with antipsychotics 2A .

m-ECT for treatment-resistant schizophrenia
Modified ECT is often considered for patients with catatonic or treatment-resistant schizophrenia in actual clinical practice (see CQ5-2 ⇒ pg. 100 re: catatonia). However, most studies that investigated the usefulness of m-ECT in patients with treatment-resistant schizophrenia are case reports or case series, and there are only a few sufficiently controlled comparative trials. Other trials that were not randomized or did not have comparative controls have been conducted, but the sample sizes of all of these trials are extremely small. , , , Nevertheless, in all of these trials, significant short-term improvements in psychiatric symptoms were observed in patients who underwent m-ECT along with antipsychotic treatment. The patient groups that received continuation m-ECT combined with antipsychotics had lower relapse rates than the antipsychotic monotherapy groups and the groups receiving continuation m-ECT alone.
The tolerability of m-ECT in treatment-resistant schizophrenia was thought to be similar to that in schizophrenia without treatment resistance, including its effects on cognitive impairment. , , , In conclusion, although there is insufficient evidence, m-ECT combined with antipsychotic therapy may have a certain degree of usefulness for improving psychiatric symptoms and reducing relapse rates in patients with treatment-resistant schizophrenia. The risk-benefit balance must be evaluated before performing m-ECT instead of clozapine in patients with treatment-resistant schizophrenia.
Recommendation
The usefulness of concomitant therapy combining antipsychotics other than clozapine with mood stabilizers/antiepileptic drugs, antidepressants, or BZ drugs has not been established. The guideline suggests avoiding these concomitant therapies for psychiatric symptoms 2D . Switching to other antipsychotics should be considered for patients whose outcome would be poor without further intervention and for whom clozapine cannot be used 2D . The concomitant use of multiple antipsychotics other than clozapine should be considered when no effect has been obtained by switching to other antipsychotics or when switching is difficult 2D .

Explanation
As described in CQ4-1 (⇒ pg. 72), the introduction of clozapine is the first recommendation for treating patients with treatment-resistant schizophrenia. Medical institutions that do not meet the necessary standards for administering clozapine should establish a clozapine usage system. When this is difficult, transfer to a medical institution that can introduce clozapine should be considered. However, this CQ describes the therapeutic options to be considered when a patient's response or tolerability to clozapine is poor or when clozapine treatment cannot be conducted due to facility limitations. Most of the reports in this field are case reports or open trials, and the RCTs are limited to small-scale reports in which risks of bias cannot be excluded.

Concomitant therapy with antipsychotics other than clozapine
The effects of combining antipsychotics other than clozapine with mood stabilizers, , antiepileptic drugs, antidepressants, and various other drugs have been examined. However, the effectiveness of these concomitant therapies for treatment-resistant schizophrenia has not been demonstrated in RCTs that provide reliable evidence, and it is possible that such therapies cause side effects. The guideline thus suggests that, for the objective of improving psychiatric symptoms, these concomitant therapies be avoided.

Concomitant therapy with antipsychotics other than clozapine and BZ drugs
There have been few studies of the concomitant use of BZ drugs for treatment-resistant schizophrenia, and the effectiveness of these therapies for the psychiatric symptoms of schizophrenia has not been shown (with the exception of their use for transient sedation). In addition, cohort studies of patients with schizophrenia suggested that the concomitant use of BZ drugs may increase the mortality rate, and thus the guideline suggests avoiding concomitant therapy with antipsychotics other than clozapine and BZ drugs.

Switching between antipsychotics other than clozapine
The evidence for the effects of antipsychotics other than clozapine in improving the psychiatric symptoms of treatment-resistant schizophrenia is insufficient. However, several studies indicated that olanzapine and risperidone were each superior to FGAs, , , , and they were non-inferior to clozapine in comparative trials. , , , Therefore, switching to either of these two drugs should be considered if they have not yet been sufficiently used and remain usable once their potential side effects are considered. However, even in cases in which a poor outcome would result from not switching, the switching of antipsychotics should be conducted with caution, because the patient's current symptoms can also worsen. Otherwise, maintaining the current prescription is one of the treatment options.
When the effects of the non-clozapine antipsychotic to which treatment was switched are insufficient, the switch should be suspended.

Polypharmacy of antipsychotics other than clozapine
The efficacy of polypharmacy compared with monotherapy for treatment-resistant schizophrenia has not been established, but the possibility of its effectiveness also cannot be denied. , , Several cohort studies (n = 7217; n = 88 ) suggested that antipsychotic polypharmacy is associated with the increased mortality rate of schizophrenia patients, but no such correlation was observed in a large-scale cohort study (n = 66 881). Thus, although there is room for discussion regarding the correlation between antipsychotic polypharmacy and an increased mortality rate, there is insufficient evidence for the utility of antipsychotic polypharmacy in improving the psychiatric symptoms of schizophrenia, and antipsychotic polypharmacy can decrease patients' adherence to the regimen, increase the total dose, and increase adverse events due to drug interactions. Polypharmacy should therefore be conducted only after carefully evaluating its effectiveness and when no other options remain. Suspending antipsychotic polypharmacy should be considered promptly when further side effects occur due to the combined therapy, and the usefulness of the treatment should be re-evaluated after a certain period of use if no effects are seen. Treatment should then be returned to monotherapy; polypharmacy should not be continued over the long term without careful consideration. Slowly reducing the dose of one drug while monitoring changes in psychiatric symptoms is necessary when polypharmacy has already been conducted for a long period. , In Japan, status reports from the treating medical institution to the Ministry of Health, Labour and Welfare are required when four or more antipsychotics are used concomitantly, and there are provisions to reduce medical fees except in special cases.
Pathological conditions other than hallucinations and delusions also need to be considered when the objective of schizophrenia treatment is full recovery, including the restoration of social function. EPS, neuroleptic malignant syndrome, and weight gain occurring during antipsychotic therapy hinder the introduction and continuation of pharmacotherapy. Furthermore, psychomotor agitation, catatonia, depression, and water intoxication can all occur during the treatment process, which can impair treatment continuation and hinder recovery. This chapter describes treatments for these pathological conditions that hinder the introduction and continuation of treatment. The descriptions in this guideline are restricted to pharmacotherapy. However, the pathological conditions addressed in this chapter are complex, and other treatments are often more useful interventions than pharmacotherapy. Treatment interventions other than pharmacotherapy should be investigated in clinical settings. The pathological conditions described in this chapter are often those for which research consent is difficult to obtain because of low incidence or severe symptomatology, and evidence from RCTs is consequently limited. For this reason, recommendations were investigated while considering not only RCTs but also case series and observational studies. When addressing these types of clinical problems with limited evidence, it is necessary to evaluate the individual pathological condition and the physical and human resources of the treatment facility on a case-by-case basis while referring to the available evidence. The significance of each CQ examined in this chapter is shown below, and a summary of this chapter is shown in Table . Please refer to each CQ, including its explanation, for specific content. We have set CQs relating to treatment methods for pathological conditions that impair the introduction or continuation of schizophrenia treatment and to measures against the side effects of antipsychotics. For the former, CQ5-1 refers to psychomotor agitation, a state in which behavior and emotions are extremely heightened. CQ5-2 discusses catatonia, a condition in which catatonic stupor and catatonic agitation alternate intermittently. CQ5-3 refers to depressive symptoms, which increase difficulties in social life and the risk of suicide. CQ5-4 describes cognitive impairment, which is related to social functional prognosis rather than to psychiatric symptoms. CQ5-5 covers pathological polydipsia and water intoxication, which have been observed in 10%-20% of patients with chronic schizophrenia and are difficult to treat. For the latter, CQ5-6 describes the treatment and prevention methods that are recommended for EPS due to antipsychotics. CQ5-7 covers neuroleptic malignant syndrome, a serious side effect that includes symptoms such as fever, myalgia, and a wide range of autonomic dysfunction and may result in death. Finally, CQ5-8 covers weight gain, which is not only a risk factor for metabolic disorders and cardiovascular diseases but also reduces adherence to antipsychotics and can worsen psychiatric symptoms owing to patients' aversion to their own appearance.

CQ5-1 What pharmacological therapies are recommended for psychomotor agitation?

Recommendation
Oral drug administration is recommended as a top priority while communicating with the patient to the extent possible.
Furthermore, when pharmacotherapy for psychomotor agitation in schizophrenia is considered, it is recommended that psychological interventions and environmental adjustments be sufficiently examined 1D .

Oral administration
Administration of aripiprazole, olanzapine, or risperidone is desirable 2D .

Intramuscular injection
Olanzapine is recommended 1D , while haloperidol monotherapy is not desirable 2C . Despite the evidence for the combination of haloperidol + promethazine, no recommendation is made (promethazine injections are not indicated). Midazolam also has no indication (no recommendation for either C ).

Intravenous injection
Haloperidol use is desirable 2D . There is some evidence for flunitrazepam, but no recommendation is made for injections of flunitrazepam, which are not indicated (no recommendation) D . Investigating the introduction of ECT is desirable when there is no response to pharmacotherapy 2D .

Explanation
Psychomotor agitation is a condition in which behavior and emotions are extremely heightened and rapid improvement is needed. However, there are only a few placebo-controlled trials for patients with this condition. Therefore, this section references results including those from single-blind trials, observational studies, cohort studies, and trials that included diseases peripheral to schizophrenia and mood disorders. The primary assessment criteria were set as improvements in psychiatric symptoms 24 hours after oral administration and two hours after intramuscular (IM) injection. There are no studies that examined differences between administration routes as primary assessment criteria, but the previous guideline recommended initial oral administration at the minimum dose for psychomotor agitation or treatment-resistant patients, provided that psychological interventions and environmental adjustments have been sufficiently considered and the patient cooperates after communication to the extent possible. , , Thirteen trials that primarily investigated the effectiveness of SGAs were referenced for oral administration. , , , , , , , , , , , , The drugs and initial administered doses investigated were as follows: aripiprazole (10-20 mg), two trials; , haloperidol (5-15 mg), seven trials; , , , , , , olanzapine (10-20 mg), eight trials; , , , , , , , quetiapine (100-800 mg), two trials; , and risperidone (2-6 mg), five trials. , , , , Improvements in psychiatric symptoms were observed on assessments such as the Positive and Negative Syndrome Scale Excited Component (PANSS-EC) and the Brief Psychiatric Rating Scale (BPRS), but only nine trials assessed the symptoms within 24 hours. , , , , , , , , The quetiapine trials comprised one with a small sample size of 20 patients and an initial administered dose of over 100 mg and one in which the symptoms were assessed after 72 hours and the initial administered dose was 300-800 mg . Based on the above-listed studies, the differences between drugs are not clear, but there is weak evidence that aripiprazole, haloperidol, olanzapine, and risperidone can improve psychiatric symptoms within 24 hours, including at the initial doses recommended in Japan D . As for adverse events, the incidence of EPS was higher with haloperidol than with the other drugs , , , D . There is also an expert opinion noting the convenience of oral solutions and orally disintegrating tablets in psychiatric emergencies, but there is no evidence for the superiority of any particular dosage form.
Trials that investigated IM injections showed general improvements in assessments such as PANSS-EC and BPRS. , , , , , , , , , , , , , , , , , , IM haloperidol injections were shown to be effective compared with placebo in a meta-analysis of 32 trials C . However, the frequency of antiparkinsonian drug use for EPS was higher than with placebo D , motor dysfunction tended to occur more frequently than with IM olanzapine injections D , and acute dystonia was more frequent than with IM haloperidol+promethazine injections C . Compared with IM haloperidol injections, IM olanzapine injections showed identical or greater improvement , , , , , , , , , D as well as fewer EPS and less QT prolongation , , , , , D . Additionally, one RCT showed a faster onset of effects D . A meta-analysis based on four clinically reliable large-scale RCTs was conducted for IM haloperidol+promethazine injections. The results showed that (i) IM haloperidol (5-10 mg) + promethazine (25-50 mg) injections were similarly effective after two hours but had a slower onset of effects compared with IM midazolam (7.5-15 mg) injections, with respiratory depression observed in one midazolam case, (ii) effectiveness was superior to IM lorazepam (~4 mg) injections, (iii) effectiveness, tolerability, and onset of effects were superior to IM haloperidol (5-10 mg) injections, with IM haloperidol injections having a higher rate of acute dystonia C , and (iv) efficacy after two hours was similar to IM olanzapine (5-10 mg) injections, with no differences in side effects and superior continuation of subsequent effects C (mixing haloperidol and promethazine injections results in turbidity, so mixed injection is not allowed ). However, promethazine injections and midazolam are not indicated for schizophrenia in Japan. Biperiden has been shown to be effective for EPS, including acute dystonia (see CQ5-6 ⇒ pg. 117), but there is almost no evidence on the efficacy and adverse events of its concomitant IM injection with haloperidol for acute psychomotor agitation. Small-scale trials showed that IM BZ drug injections were more effective than placebo, but no evidence clearly demonstrating superiority was obtained C . There is almost no evidence on intravenous (IV) injections, and we referenced Hatta et al., the only available research results, which are from Japan. Hatta et al. also collected expert opinions and recommended IV administration of haloperidol or flunitrazepam when the patient needs to be put to sleep. It was also reported that IV injections of haloperidol ultimately reduced the final administered dose of BZ drugs, whereas combined IV flunitrazepam and IM levomepromazine injections caused significantly higher rates of respiratory depression than IV flunitrazepam alone or flunitrazepam+haloperidol injections. IV injections of flunitrazepam+haloperidol significantly prolonged QTc compared with IV flunitrazepam injections, but no serious arrhythmia was observed , D . However, injections of flunitrazepam are not indicated for schizophrenia in Japan. A review by Pompili et al. on the effectiveness of ECT in schizophrenia indicated that ECT is a useful method for patients with psychomotor agitation. It must be assumed that there will be many exceptions regardless of the method used, since the doses or methods included in the package insert may not be sufficient for these types of patients.
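Because several of the trials above report QT/QTc prolongation as a safety outcome, the following minimal Python sketch shows how a rate-corrected QT interval is commonly derived (Bazett's formula) and flagged. The numeric thresholds are commonly cited reference values, not values taken from this guideline, and are included only as assumptions for illustration.

# Minimal sketch: Bazett-corrected QT interval (QTc) and a simple safety flag.
from math import sqrt

def qtc_bazett(qt_ms: float, heart_rate_bpm: float) -> float:
    # Bazett's formula: QTc = QT / sqrt(RR), with the RR interval in seconds.
    rr_sec = 60.0 / heart_rate_bpm
    return qt_ms / sqrt(rr_sec)

def qtc_concern(qtc_ms: float, baseline_qtc_ms: float) -> bool:
    # Commonly cited thresholds (assumed for illustration): absolute QTc >= 500 ms
    # or an increase of >= 60 ms from baseline.
    return qtc_ms >= 500 or (qtc_ms - baseline_qtc_ms) >= 60

qtc = qtc_bazett(qt_ms=400, heart_rate_bpm=90)           # about 490 ms at a heart rate of 90/min
print(round(qtc), qtc_concern(qtc, baseline_qtc_ms=420)) # 490 True (increase of about 70 ms)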
It is also important to screen for physical problems before choosing the drug and its administration route and to conduct sufficient physical monitoring after drug administration.
References
1 Allen MH, Currier GW, Carpenter D, et al. The expert consensus guideline series. Treatment of behavioral emergencies. J Psychiatr Pract. 2005;(Suppl 1):5–108.
2 Buchanan RW, Kreyenbuhl J, Kelly DL, et al. The 2009 schizophrenia PORT psychopharmacological treatment recommendations and summary statements. Schizophr Bull. 2010;36:71–93. 19955390
3 Hasan A, Falkai P, Wobrock T, et al. World Federation of Societies of Biological Psychiatry (WFSBP) Guidelines for Biological Treatment of Schizophrenia, part 1: update 2012 on the acute treatment of schizophrenia and the management of treatment resistance. World J Biol Psychiatry. 2012;13:318–78. 22834451
4 Currier GW, Chou JC, Feifel D, et al. Acute treatment of psychotic agitation: a randomized comparison of oral treatment with risperidone and lorazepam versus intramuscular treatment with haloperidol and lorazepam. J Clin Psychiatry. 2004;65:386–94.
5 Currier GW, Trenton AJ, Walsh PG, et al. A pilot, open-label safety study of quetiapine for treatment of moderate psychotic agitation in the emergency setting. J Psychiatr Pract. 2006;12:223–8. 16883147
6 Escobar R, San L, Pérez V, et al. Effectiveness results of olanzapine in acute psychotic patients with agitation in the emergency room setting: results from NATURA study. Actas Esp Psiquiatr. 2008;36:151–7.
7 Hori H, Ueda N, Yoshimura R, et al. Olanzapine orally disintegrating tablets (Zyprexa Zydis) rapidly improve excitement components in acute phase of first-episode schizophrenic patients: an open-label prospective study. World J Biol Psychiatry. 2009;10:741–5. 19707954
8 Hsu WY, Huang SS, Lee BS, et al. Comparison of intramuscular olanzapine, orally disintegrating olanzapine tablets, oral risperidone solution, and intramuscular haloperidol in the management of acute agitation in an acute care psychiatric ward in Taiwan. J Clin Psychopharmacol. 2010;30:230–4. 20473056
9 Katagiri H, Fujikoshi S, Suzuki T, et al. A randomized, double-blind, placebo-controlled study of rapid-acting intramuscular olanzapine in Japanese patients for schizophrenia with acute agitation. BMC Psychiatry. 2013;13:20. 23311957
10 Kinon BJ, Ahl J, Rotelli MD, et al. Efficacy of accelerated dose titration of olanzapine with adjunctive lorazepam to treat acute agitation in schizophrenia. Am J Emerg Med. 2004;22:181–6. 15138953
11 Kinon BJ, Roychowdhury SM, Milton DR, et al. Effective resolution with olanzapine of acute presentation of behavioral agitation and positive psychiatric symptoms in schizophrenia. J Clin Psychiatry. 2001;62(Suppl 2):17–21.
12 Kinon BJ, Stauffer VL, Kollack-Walker S, et al. Olanzapine versus aripiprazole for the treatment of agitation in acutely ill patients with schizophrenia. J Clin Psychopharmacol. 2008;28:601–7. 19011427
13 Marder SR, West B, Lau GS, et al. Aripiprazole effects in patients with acute schizophrenia experiencing higher or lower agitation: a post hoc analysis of 4 randomized, placebo-controlled trials. J Clin Psychiatry. 2007;68:662–8. 17503974
14 Veser FH, Veser BD, McMullan JT, et al. Risperidone versus haloperidol, in combination with lorazepam, in the treatment of acute agitation and psychosis: a pilot, randomized, double-blind, placebo-controlled trial. J Psychiatr Pract. 2006;12:103–8.
15 Villari V, Rocca P, Fonzo V, et al. Oral risperidone, olanzapine and quetiapine versus haloperidol in psychotic agitation. Prog Neuropsychopharmacol Biol Psychiatry. 2008;32:405–13. 17900775
16 Walther S, Moggi F, Horn H, et al. Rapid tranquilization of severely agitated patients with schizophrenia spectrum disorders: a naturalistic, rater-blinded, randomized, controlled study with oral haloperidol, risperidone, and olanzapine. J Clin Psychopharmacol. 2014;34:124–8. 24346752
17 Hatta S (Sawa Y, Hirata T (eds)): Pharmacological therapy. Japanese Association for Emergency Psychiatry (ed): Emergency Psychiatry Clinical Guideline 2009.
18 TREC Collaborative Group. Rapid tranquillisation for agitated patients in emergency psychiatric rooms: a randomised trial of midazolam versus haloperidol plus promethazine. BMJ. 2003;327:708–13. 14512476
19 Alexander J, Tharyan P, Adams C, et al. Rapid tranquillisation of violent or agitated patients in a psychiatric emergency setting. Pragmatic randomised trial of intramuscular lorazepam v. haloperidol plus promethazine. Br J Psychiatry. 2004;185:63–9. 15231557
20 Battaglia J, Houston JP, Ahl J, et al. A post hoc analysis of transitioning to oral treatment with olanzapine or haloperidol after 24-hour intramuscular treatment in acutely agitated adult patients with schizophrenia. Clin Ther. 2005;27:1612–8. 16330297
21 Battaglia J, Lindborg SR, Alaka K, et al. Calming versus sedative effects of intramuscular olanzapine in agitated patients. Am J Emerg Med. 2003;21:192–8.
22 Breier A, Meehan K, Birkett M, et al. A double-blind, placebo-controlled dose-response comparison of intramuscular olanzapine and haloperidol in treatment of acute agitation in schizophrenia. Arch Gen Psychiatry. 2002;59:441–8. 11982448
23 Citrome L. Comparison of intramuscular ziprasidone, olanzapine, or aripiprazole for agitation: a quantitative review of efficacy and safety. J Clin Psychiatry. 2007;68:1876–85.
24 Higashima M, Takeda T, Nagasaka T, et al. Combined therapy with low-potency neuroleptic levomepromazine as an adjunct to haloperidol for agitated patients with acute exacerbation of schizophrenia. Eur Psychiatry. 2004;19:380–1. 15363480
25 Huf G, Alexander J, Allen MH, et al. Haloperidol plus promethazine for psychosis-induced aggression. Cochrane Database Syst Rev. 2009;(3):CD005146.
26 Huf G, Coutinho ES, Adams CE, et al. Rapid tranquillisation in psychiatric emergency settings in Brazil: pragmatic randomised controlled trial of intramuscular haloperidol versus intramuscular haloperidol plus promethazine. BMJ. 2007;335:869. 17954515
27 Lindborg SR, Beasley CM, Alaka K, et al. Effects of intramuscular olanzapine vs. haloperidol and placebo on QTc intervals in acutely agitated patients. Psychiatry Res. 2003;119:113–23. 12860365
28 Perrin A, Anand E, Dyachkova Y, et al. A prospective, observational study of the safety and effectiveness of intramuscular psychotropic treatment in acutely agitated patients with schizophrenia and bipolar mania. Eur Psychiatry. 2012;27:234–9. 20620029
29 Powney MJ, Adams CE, Jones H. Haloperidol for psychosis-induced aggression or agitation (rapid tranquillisation). Cochrane Database Syst Rev. 2012;11:CD009377. 23152276
30 Raveendran NS, Tharyan P, Alexander J, et al. Rapid tranquillisation in psychiatric emergency settings in India: pragmatic randomised controlled trial of intramuscular olanzapine versus intramuscular haloperidol plus promethazine. BMJ. 2007;335:865. 17954514
31 San L, Arranz B, Querejeta I, et al. A naturalistic multicenter study of intramuscular olanzapine in the treatment of acutely agitated manic or schizophrenic patients. Eur Psychiatry. 2006;21:539–43. 16697151
32 Wright P, Birkett M, David SR, et al. Double-blind, placebo-controlled comparison of intramuscular olanzapine and intramuscular haloperidol in the treatment of acute agitation in schizophrenia. Am J Psychiatry. 2001;158:1149–51. 11431240
33 Ishimoto K, editor. Yamaguchi Hospital Pharmacists Association (ed): Checking Manual for Dispensing Injectable Drugs, 4th edn. Tokyo: Elsevier Japan; 2012. p. 313.
34 Gillies D, Sampson S, Beck A, et al. Benzodiazepines for psychosis-induced aggression or agitation. Cochrane Database Syst Rev. 2013;4:CD003079.
35 Hatta K, Nakamura M, Yoshida K, et al. A prospective naturalistic multicentre study of intravenous medications in behavioural emergencies: haloperidol versus flunitrazepam. Psychiatry Res. 2010;178:182–5. 20452043
36 Hatta K, Takahashi T, Nakamura H, et al. Prolonged upper airway instability in the parenteral use of benzodiazepine with levomepromazine. J Clin Psychopharmacol. 2000;20:99–101.
37 Hatta K, Takahashi T, Nakamura H, et al. The association between intravenous haloperidol and prolonged QT interval. J Clin Psychopharmacol. 2001;21:257–61. 11386487
38 Pompili M, Lester D, Dominici G, et al. Indications for electroconvulsive treatment in schizophrenia: a systematic review. Schizophr Res. 2013;146:1–9. 23499244
Recommendation Searching for organic factors and improving the general condition are recommended before intervention 1D . Considering the possibility that catatonia is the initial symptom of antipsychotic-induced neuroleptic malignant syndrome, and suspending antipsychotics and prioritizing the treatment of neuroleptic malignant syndrome, are recommended when that disease is suspected 1D . It is desirable to pay sufficient attention to changes in the general condition and conduct pharmacological therapy according to normal schizophrenia treatment, since there is not sufficient evidence regarding the efficacy and adverse events of pharmacological therapy specifically for catatonia in schizophrenia 2D . It is desirable to introduce ECT since there is established evidence of its effectiveness 2D . Explanation Catatonia refers to a pathological condition that intermittently alternates between catatonic stupor, in which all spontaneous behavior stops despite clear consciousness, and catatonic excitement, which is inconsistent and incomprehensible excitement without any voluntary control. It is primarily seen in catatonic schizophrenia but can also appear in psychiatric illnesses other than schizophrenia. This section presumes that the patient has been diagnosed with schizophrenia, but it must be borne in mind that this diagnosis is challenging when catatonia is encountered. It must also be assumed, even when the patient is diagnosed with schizophrenia, that there may be organic factors in the background, such as neurological diseases, endocrine/metabolic diseases, infectious diseases, withdrawal symptoms, and drug addiction. It is important to prioritize tests that can be conducted quickly and to search for organic factors to the extent possible while improving the general condition with sufficient fluid replacement. Only a few studies have targeted schizophrenia specifically, so for this CQ we considered recommendations on the assumption that schizophrenia is included, while referencing studies on peripheral illnesses. There is currently insufficient evidence on the effectiveness and adverse events of pharmacotherapy specifically for catatonia in schizophrenia. With regard to antipsychotics, the effects of olanzapine and quetiapine in 25 observational studies including related illnesses were inconsistent, and it has been reported that catatonic symptoms worsened, EPS appeared, and irritability worsened with aripiprazole, risperidone, and FGAs. Similarly, trials that included illnesses other than schizophrenia indicated a risk of exacerbation due to antipsychotics D . Catatonia and neuroleptic-induced catatonia can also occur in association with antipsychotic treatment, and neuroleptic-induced catatonia may also be an initial symptom of neuroleptic malignant syndrome D . The above-described studies highlight that catatonia in schizophrenia needs to be differentiated according to whether it is due to the underlying illness or is an initial symptom of neuroleptic malignant syndrome due to pharmacotherapy. Treatment should promptly switch to that for neuroleptic malignant syndrome, while still conducting normal treatment and remaining aware of changes in the general condition, when progression to neuroleptic malignant syndrome is suspected D (see CQ5-7 ⇒ pg. 133). Furthermore, a Cochrane Review that investigated the effectiveness of BZ drugs against catatonia in schizophrenia or serious psychiatric illness conducted a meta-analysis of 22 studies but reported no superiority of BZ drugs compared with placebo.
There were no differences from placebo for catatonia in chronic schizophrenia , D . Improvements were shown in observational studies, but the overall sample size was small and patients with illnesses other than schizophrenia were often included. , , , , , , Therefore, it is likely that there are differences in the treatment response to BZ drugs, since a variety of pathological conditions are included in catatonia. There is also no consensus on the efficacy and adverse events of BZ drugs, or on the durability of their effects, against catatonia in schizophrenia. ECT was shown to be effective in 85% of patients in a case series that investigated treatment methods in 178 patients (270 episodes) with catatonia due to schizophrenia as well as other illnesses. A systematic review based on 31 trials of the effectiveness of ECT in patients with schizophrenia showed that it was particularly useful for patients with schizophrenia with (i) catatonia requiring rapid improvement, (ii) resistance to pharmacotherapy, and (iii) psychomotor agitation. Upon careful examination of the three studies in this review that investigated the effectiveness of ECT against catatonia in schizophrenia, Hatta et al. reported on 50 patients with schizophrenia who received lorazepam for catatonia and, when this was ineffective, either ECT or oral medication; all of those treated with ECT improved. In contrast, with oral administration, improvement was restricted to 68%, 26%, 16%, and 2% of patients for chlorpromazine, risperidone, haloperidol, and BZ drugs, respectively. Phutane et al. investigated the reasons for conducting ECT against schizophrenia in 202 patients and showed that the most common reason was to augment the effects of pharmacological therapy. The second most common reason was to improve catatonia, and this resulted in significant improvements. Thirthalli et al. investigated the effectiveness of ECT in 87 patients with schizophrenia and reported that the 53 patients with catatonia showed faster improvement than the other 34 patients. Based on these studies, ECT is thought to be effective against catatonia in schizophrenia D . Catatonia in schizophrenia is a pathological condition that significantly reduces QOL. There is no doubt that rapid treatment is needed, but the evidence on treatment methods is insufficient and there are currently no treatment methods that can be actively recommended. Accumulating evidence on catatonia specifically in schizophrenia and elucidating its pathological bases are needed to provide clear evidence on the optimal treatment method.
Recommendation There are various causes that may induce depressive symptoms in schizophrenia. It is recommended that they be distinguished by considering symptoms of the illness itself, psychological reactions, and drug-induced symptoms, and that measures be taken according to the cause 1D . It is desirable to reduce the dose of antipsychotics when they are suspected of causing depression 2D . With regard to switching antipsychotics, switching to SGAs is recommended when taking haloperidol 1C . It is desirable not to use antidepressants or lithium concomitantly because of inconsistent results and the possibility of side effects occurring due to drug interactions 2D . It is desirable not to conduct ECT since it has not shown any antidepressive effects 2D . Explanation Depressive symptoms of schizophrenia occur in all stages, including the prodromal, initial, and acute stages, convalescent post-psychotic depression, and the chronic pre-relapse stage, with a reported prevalence of 6%-75% (mode 25%). The coexistence of depressive symptoms increases difficulties in social life and the risk of suicide. , Their causes are also extremely complex, and it is recommended that they be differentiated by considering side effects of antipsychotics, the consequences of drug abuse or withdrawal, symptoms due to the disease itself, psychological reactions such as despair or difficulties in social life, and institutional pathologies due to long-term hospitalization, and that measures according to the cause be implemented 1D . As for improvements in depressive symptoms due to reduced doses of antipsychotics, one trial studied reduced doses of fluphenazine decanoate LAI in 22 patients with schizophrenia with primarily negative symptoms. The results showed that physical discomfort decreased, depression improved, and positive symptoms did not worsen D . Based on this study, it is desirable to reduce antipsychotic doses since this may improve depression 2D . As for effects against depressive symptoms, a meta-analysis that compared BPRS and PANSS scores showed that the SGAs aripiprazole, clozapine, olanzapine, quetiapine, and amisulpride* were more effective than FGAs (primarily haloperidol), but risperidone, zotepine, sertindole*, and ziprasidone* showed no differences from FGAs C . Therefore, with regard to the effects on depressive conditions of switching antipsychotics, it is recommended to switch to SGAs when administering haloperidol 1C . Recent results on effectiveness using depressive symptom scales specialized for schizophrenia showed no significant differences when perphenazine and SGAs were directly compared C . There were also no differences between SGAs (olanzapine, quetiapine, risperidone, and ziprasidone*) on depressive symptoms over 24 months in 226 patients who were hospitalized in the acute phase C . The evidence on the effectiveness of augmentation therapy with antidepressants for depressive symptoms is not consistent. A meta-analysis of antidepressants (imipramine, amitriptyline, mianserin, nortriptyline, trazodone, sertraline, bupropion*, moclobemide*, and viloxazine*) based on 11 RCTs showed no worsening of psychiatric symptoms due to antidepressant augmentation and the possibility of antidepressant effects D . However, the small sample sizes and inconsistent trial entry criteria and assessment methods were noted to impair the interpretation of the results of these studies. The few RCTs on the concomitant use of newer antidepressants produced inconsistent results even for the same drug.
Mirtazapine at 30 mg/d was shown to have effects in one out of three studies , , D . Two trials showed that 40 mg/d of citalopram* improved Hamilton Rating Scale for Depression (HDRS) scores and decreased suicidal ideation D . Neither report showed differences in side effects or worsening of psychiatric symptoms D . A trial limited to post-psychotic depression, which occurs in the convalescent stage following the disappearance of acute symptoms of schizophrenia, reported that imipramine addition therapy was effective D . However, these clinical trials had small sample sizes (14 and 21 participants), and both were conducted in patients receiving fluphenazine decanoate LAI treatment. Few subsequent studies of concomitant antidepressant use showed clear differences from placebo, and trial design issues have been raised D . The existence of only small-scale trials, trial design issues such as inconsistent assessment of depressive symptoms, and inconsistent results on effectiveness, despite no observed worsening of psychiatric symptoms, were highlighted as problems in determining the usefulness of concomitant antidepressants for depressive symptoms of schizophrenia, including post-psychotic depression. Concomitant use is not recommended at this time, particularly when considering that package inserts in Japan clearly state contraindications due to interactions or warnings for concomitant use because of increased blood antipsychotic concentrations caused by the inhibition of drug-metabolizing enzymes by antidepressants 2D . A systematic review of concomitant lithium carbonate therapy showed no differences in the improvement of depressive symptoms between a concomitant lithium carbonate group and a placebo group D . The discussion of these results suggested inconsistencies in assessment methods and early discontinuation due to side effects. One RCT (n = 21) that used improvement in the BPRS depression score as an indicator showed improvements only in the concomitant group at an eight-week assessment D . Overall, the results of concomitant lithium carbonate therapy were contradictory; furthermore, its package insert in Japan states that concomitant use with drugs such as haloperidol can cause electrocardiographic changes, severe EPS, persistent dyskinesia, idiopathic neuroleptic malignant syndrome, and irreversible brain damage. Therefore, considering the concomitant use warnings, it is recommended not to do so 2D . There are few studies that focus on improvements in depressive symptoms of schizophrenia due to ECT. ECT (8-20 sessions) did not show any antidepressive effects in a placebo-comparative open trial that included 15 patients with treatment-resistant schizophrenia, in whom doses over 600 mg/d chlorpromazine equivalent of two or more antipsychotics of different classes had been ineffective for over six weeks and clozapine had been either ineffective or refused D . In conclusion, ECT is not recommended 2D .
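The treatment-resistance criterion above is expressed in chlorpromazine equivalents. As a rough illustration of that arithmetic, the Python sketch below sums daily doses converted to chlorpromazine equivalents and checks the 600 mg/d, two-or-more-drugs condition. The equivalence factors are approximate values from published equivalency tables and are given here only as assumptions; an up-to-date table should be consulted in practice.

# Minimal sketch: total antipsychotic dose in chlorpromazine (CP) equivalents.
# The per-drug factors below (mg roughly equivalent to 100 mg chlorpromazine)
# are approximate published values, included only as illustrative assumptions.
CP_100MG_EQUIVALENT = {
    "chlorpromazine": 100.0,
    "haloperidol": 2.0,
    "risperidone": 1.0,
    "olanzapine": 2.5,
    "quetiapine": 66.0,
    "aripiprazole": 4.0,
}

def cp_equivalent_total(daily_doses_mg: dict) -> float:
    # Convert each daily dose to mg/d of chlorpromazine equivalent and sum.
    return sum(100.0 * dose / CP_100MG_EQUIVALENT[drug]
               for drug, dose in daily_doses_mg.items())

regimen = {"risperidone": 4, "olanzapine": 5}   # hypothetical combination
total = cp_equivalent_total(regimen)            # 400 + 200 = 600 mg/d CP equivalent
meets_dose_criterion = total >= 600 and len(regimen) >= 2
print(total, meets_dose_criterion)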
Recommendation Antipsychotics improve cognitive impairment, but the magnitude of these effects is small A . SGAs have a slightly greater improvement effect than FGAs B . There are no differences in the improvement effects on cognitive function between drugs B . Concomitant use of anticholinergic drugs and BZ drugs has an adverse effect on cognitive function D . No improvement in cognitive function is seen with cholinesterase inhibitor, mirtazapine, or mianserin addition therapy C . It is recommended to use SGAs alone at appropriate doses and to minimize concomitant use of anticholinergic drugs or BZ sedatives to improve cognitive impairment 1A . Explanation Cognitive function is the ability to integrate information processing and social function, and cognitive impairment is correlated more with social-function prognosis than with psychiatric symptoms. Neuropsychological methods have been used to assess cognitive impairment in research, but attention should also be paid to the recovery of social function in actual clinical practice. Effects of antipsychotics There have been many comparative studies and two meta-analyses on the improvement effects of antipsychotics on cognitive impairment. , , , , , , , , , , , , A meta-analysis that carefully examined 41 studies showed that SGAs (clozapine, olanzapine, quetiapine, and risperidone) improved cognitive impairment. Fourteen RCTs that compared them with FGAs showed that they improved cognitive function more than FGAs, though the effect size was small at 0.24. Furthermore, no differences in the improvement effects on cognitive function were found between drugs. These improvement effects on cognitive impairment were seen in cases restricted to first-episode schizophrenia , , , and in cases where the subjects had early-onset schizophrenia (13-18 years old). One hypothesis for why SGAs are superior to FGAs in improving cognitive impairment is that the former have fewer motor side effects such as EPS. Fewer EPS mean less concomitant use of anticholinergic drugs, which are used to treat EPS. Anticholinergic drugs worsen cognitive impairment in schizophrenia, and reducing their dose or suspending them improves cognitive impairment. BZ sedatives, which are often used concomitantly in pharmacotherapy, also worsen cognitive impairment, and reducing their dose or suspending them improves cognitive impairment. Reducing concomitant drugs is desirable for improving cognitive impairment C . Attention must be paid to the doses used when conducting antipsychotic drug therapy with the expectation of improving cognitive impairment. In general, higher doses of antipsychotics reduce cognitive function. Polypharmacy at over 1000 mg/d chlorpromazine equivalent or high doses of antipsychotics resulted in reduced visual memory, delayed recall, performance IQ, and task performance when compared with lower-dose therapy. The severity of illness and the low cognitive function of patients who receive high doses must also be considered, but it is thought that high doses are at least partially related to cognitive impairment D . There is limited evidence relating to addition (concomitant) therapy with drugs other than antipsychotics. Addition therapy with cholinesterase inhibitors has not been shown to be superior in double-blind trials. , , , , The same applies to addition therapy with mirtazapine and mianserin, which are antagonists of the serotonin 5-HT2 receptor, a receptor related to cognitive function C .
In conclusion, it is recommended that an appropriate dose of SGAs is used and the concomitant use of anticholinergic drugs and BZ sedatives is minimized for improving cognitive impairment 1A .
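For reference, the effect size of 0.24 cited above is a standardized mean difference (Cohen's d): the difference in mean outcome between groups divided by the pooled standard deviation. The minimal Python sketch below reproduces that arithmetic with invented numbers; the values are purely illustrative and are not taken from the cited meta-analysis.

# Minimal sketch: standardized mean difference (Cohen's d), the kind of effect
# size reported above. All numbers are invented for illustration only.
from math import sqrt

def cohens_d(mean1: float, sd1: float, n1: int,
             mean2: float, sd2: float, n2: int) -> float:
    pooled_sd = sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Hypothetical change scores on a cognitive battery (higher = more improvement).
d = cohens_d(mean1=0.36, sd1=1.0, n1=200, mean2=0.12, sd2=1.0, n2=200)
print(round(d, 2))   # 0.24, conventionally regarded as a small effect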
Recommendation SGAs may be effective as antipsychotic treatment for psychosis‐related polydipsia D , so it is desirable to conduct standard SGA‐based pharmacotherapy 2D . It is desirable to introduce clozapine in cases of psychosis‐related polydipsia due to treatment‐resistant schizophrenia 2D . For other pharmacotherapies, sample sizes are small and assessments are inconsistent, and there is no pharmacotherapy that can be considered desirable 2D . Explanation Psychosis‐related polydipsia and its associated water intoxication were seen in 10%‐20% of patients with chronic schizophrenia, and another study reported that polydipsia and water intoxication were found in 10%‐20% and 3%‐4%, respectively, of patients hospitalized in psychiatric hospitals in Japan. Hyponatremia due to water intoxication can induce heart failure, consciousness disturbance, seizures, rhabdomyolysis, and neuroleptic malignant syndrome, which often complicate treatment and worsen the vital prognosis. For this reason, measures against psychosis‐related polydipsia are clinically important, but there are no large‐scale prospective studies. Furthermore, reports on individual efforts often involve interventions on the treatment environment and behavioral patterns; reports focused on pharmacotherapy are limited and the level of evidence is low. Are there antipsychotics that are effective against psychosis‐related polydipsia? Many reports (case series) have indicated that clozapine‐based treatment is effective D . Other reports have indicated the effectiveness of switching to SGAs that can be used in Japan, including quetiapine, aripiprazole, olanzapine, perospirone, blonanserin, and risperidone, but the assessments are inconsistent D . Psychosis‐related polydipsia and water intoxication were reported even before the introduction of antipsychotics, when they were regarded as psychiatric symptoms of schizophrenia itself. It is therefore desirable to conduct standard SGA‐based pharmacotherapy. Next, it is desirable to examine the option of introducing clozapine if psychosis‐related polydipsia and water intoxication are serious and thought to be due to the symptoms of treatment‐resistant schizophrenia (see Chapter 4 ⇒ pg. 67). Are there other pharmacotherapies that are effective against psychosis‐related polydipsia? Psychosis‐related polydipsia is thought to be caused by angiotensin II in association with chronic dopamine D2 receptor blockade by antipsychotics. Treatment effects of angiotensin‐converting enzyme (ACE) inhibitors (captopril and enalapril), beta‐blockers (propranolol), opioid antagonists (naloxone), demeclocycline*, carbamazepine, and lithium have been reported, but the sample sizes of these studies are small and the assessments are inconsistent. Furthermore, the risk of side effects with concomitant use is unclear, so there is no pharmacotherapy that can be considered desirable 2D .
Recommendation Treatment after the onset of EPS As with other drug‐induced side effects, if EPS occurs it is recommended, as a general rule, to reduce the dose of the causative drug and to discontinue it in severe cases 1D . However, when the causative drug is effective for psychiatric symptoms, measures must be taken in consideration of the balance of benefit and harm. Descriptions are given below by side effect and symptom. Drug‐induced Parkinsonism ① Switching to SGAs, which are less likely to cause Parkinsonism, is recommended when Parkinsonism has occurred with FGAs, which are likely to induce EPS 1A . Switching to clozapine, quetiapine, or olanzapine is desirable when the same side effects occur even when using SGAs 2D . ② Upon careful assessment of psychiatric symptoms, reducing the dose of orally administered antipsychotics is recommended if possible 1D . ③ Concomitant use of anticholinergic drugs (biperiden and trihexyphenidyl) or antiparkinsonian drugs (amantadine) is desirable when ① or ② cannot be selected or when antipsychotic drug adjustments alone are not effective 2D . Acute dystonia Oral anticholinergic drugs (biperiden and trihexyphenidyl), antihistamines (promethazine), or IM anticholinergic drug injections are desirable 2D . Switching to aripiprazole, olanzapine, or quetiapine is desirable when acute dystonia is caused by high‐titer FGAs 2D . One meta‐analysis compared high and low doses of antipsychotics and showed that the risk of acute dystonia was lower in the low‐dose group D ; reducing the dose of antipsychotics is therefore desirable as an option 2D . Akathisia Active interventions such as pharmacotherapy, psychotherapy, and environmental adjustments are recommended when there is a high degree of urgency accompanied by severe anxiety and frustration, and where risks of suicidal ideation, suicide attempts, or other harm are expected 1D . Reducing the dose of the orally administered antipsychotic after sufficient discussion with the patient is recommended when akathisia symptoms are mild 1D . Switching to SGAs is recommended when high‐titer/high‐dose FGAs are prescribed 1C . Furthermore, using medium‐titer or low‐titer FGAs is desirable when switching to SGAs is not possible 2D . The concomitant use of anticholinergic drugs, beta‐blockers (propranolol), clonazepam, mianserin, mirtazapine, trazodone, cyproheptadine, and vitamin B6 is not desirable 2D . Tardive dyskinesia (TD) Switching to clozapine, olanzapine, or quetiapine after the onset of TD may reduce TD symptoms D , and it is desirable to switch to these drugs 2D . Small‐scale RCT results showed that reducing the dose of anticholinergic drugs decreased the severity of TD, so dose reduction is desirable when anticholinergic drugs are used concomitantly 2D . Tardive dystonia There is no established treatment method, but switching to clozapine is desirable when selecting antipsychotics 2D . Prevention of EPS Prevention of drug‐induced Parkinsonism Selecting SGAs is recommended over FGAs 1A . Selecting one SGA among clozapine, quetiapine, or olanzapine is desirable 2D . Prevention of acute dystonia Selecting SGAs is recommended over FGAs 1C . Anticholinergic drugs (biperiden and trihexyphenidyl) are effective for prevention when using FGAs D , and temporary use for up to several weeks after starting treatment is desirable 2D . Prevention of akathisia Avoiding high‐titer/high‐dose FGAs and selecting SGAs is recommended 1C . Alternatively, it is desirable to select medium‐titer or low‐titer FGAs when SGAs cannot be used 2D .
No recommendation is made for a specific SGA. Prevention of TD Selecting SGAs is recommended over FGAs 1D . Prevention of tardive dystonia There is almost no evidence at this stage on drugs that are effective for preventing tardive dystonia, so no recommendation is made. Explanation EPS can be divided into acute symptoms, which are likely to occur after starting or increasing antipsychotics (acute dystonia, akathisia, and Parkinsonism), and tardive symptoms, which often occur several months after administration (TD and tardive dystonia). It is very important to first differentiate these symptoms from various psychiatric symptoms (eg, anxiety, irritation/agitation, depression, catatonic symptoms, and conversion symptoms). Pharmacotherapies for EPS include preventative therapy to minimize the onset of symptoms as much as possible and symptomatic treatment when symptoms occur. For first‐time or previously untreated patients, prescriptions are planned in routine clinical care with prevention of onset in mind. In this guideline, we first describe the measures to be taken when EPS has already occurred; prevention is discussed later. (A) Treatment after the onset of EPS (1) Drug‐induced Parkinsonism Drug‐induced Parkinsonism develops within a few weeks after drug administration. It tends to develop in patients who are middle‐aged or older, and in many cases the onset risk increases with the dose of antipsychotics. The onset is also influenced by individual vulnerabilities such as organic brain disease and aging. Muscle rigidity, bradykinesia, dysarthria, dysphagia, postural dysregulation, and other symptoms are observed in a manner similar to idiopathic Parkinsonism, but in drug‐induced cases the symptoms are commonly bilateral, and there are other differences such as the absence of tremor at rest. Proper measures against drug‐induced Parkinsonism are important, since Parkinsonism interferes with the patient's behavior, is a risk factor for TD, and is a cause of sluggishness, falls, and aspiration. ① Many studies have shown that clozapine, olanzapine, quetiapine, aripiprazole, perospirone, risperidone, blonanserin, and paliperidone cause fewer EPS than haloperidol A . Switching to SGAs, which are less likely to cause Parkinsonism, is recommended when Parkinsonism occurs while using FGAs, which are more likely to cause EPS 1A . One RCT‐based meta‐analysis directly compared the rates of concomitant antiparkinsonian drug use between SGAs. Concomitant antiparkinsonian drugs were used more often with risperidone than with clozapine, olanzapine, and quetiapine; more often with aripiprazole than with olanzapine but comparably to risperidone; much less often with clozapine than with risperidone and about as often as with olanzapine; less often with olanzapine than with aripiprazole and risperidone, about as often as with clozapine, and more often than with quetiapine; and less often with quetiapine than with olanzapine and risperidone D . These results indicate that there are differences in the frequency of drug‐induced Parkinsonism even among SGAs, and switching to clozapine, quetiapine, or olanzapine is desirable when the same side effects occur even when using SGAs 2D .
② Four systematic reviews have shown that the likelihood of EPS depends on the dose of the antipsychotic. Reducing the dose of orally administered antipsychotics, if possible upon careful assessment of psychiatric symptoms, is recommended 1D . ③ There are two RCTs on the effects of anticholinergic drugs and antiparkinsonian drugs. In one, 35 patients with schizophrenia who had already exhibited EPS were allocated to amantadine (100 mg/d; 18 patients, mean haloperidol dose 22.4 mg/d) or biperiden (2 mg/d; 17 patients, mean haloperidol dose 19.6 mg/d), and both groups showed similar improvements in EPS. In the other, 32 patients with schizophrenia whose symptoms were stable suspended concomitant trihexyphenidyl and, after seven days, were randomly allocated to amantadine (100 mg/d) or biperiden (2 mg/d); when assessed after two weeks with the Simpson‐Angus Neurologic Rating Scale, both groups showed similar improvements in the EPS score D . However, anticholinergic drugs have peripheral anticholinergic side effects such as dry mouth, constipation, dysuria, and blurred vision, as well as cognitive impairment, particularly visual memory impairment D , and amantadine has side effects such as vomiting and hallucinations D . Therefore, caution is required to use these drugs properly. Both anticholinergic drugs (biperiden) and antiparkinsonian drugs (amantadine) are effective, but each carries a characteristic risk of side effects, and it is desirable to use them concomitantly while taking these risks into consideration 2D . Cognitive impairment from anticholinergic drugs can seriously impair patients' lives, so gradual discontinuation should be the objective once Parkinsonism has improved. A double‐blind trial reported that "reducing doses at a rate of 1 mg/2 weeks is possible" as a specific tapering method for trihexyphenidyl. (2) Acute dystonia Acute dystonia is common in young men and consists of abnormal posture or muscle rigidity due to involuntary, continuous muscle contraction that usually occurs within three days of drug administration. Upturning of the eyes and twisting of the neck and trunk are common manifestations. These can be painful, and although rare, forms such as laryngeal dystonia can be fatal. Approximately 80% of events occur in the evening or at night. Acute dystonia can also be a reason for refusing medication, and prompt symptomatic treatment is often necessary. Anticholinergic drugs (biperiden and trihexyphenidyl) and antihistamines (promethazine) are suggested for symptomatic treatment 2D , and IM anticholinergic drug injections are suggested when rapid recovery is needed 2D . With regard to changing antipsychotics, one double‐blind trial split 70 patients with schizophrenia who had a history of acute dystonia while taking FGAs into an oral risperidone group and an oral olanzapine group of 35 patients each. A comparison of the concomitant use of drugs for acute dystonia (anticholinergic drugs) between the two groups revealed that 14 of 35 patients in the risperidone group and 4 of 35 in the olanzapine group developed dystonia D .
Furthermore, one meta‐analysis showed that aripiprazole and olanzapine caused dystonia significantly less often than haloperidol D , and another meta‐analysis showed that quetiapine triggered dystonia in significantly fewer patients than FGAs D . In conclusion, switching to aripiprazole, olanzapine, or quetiapine is desirable when acute dystonia has occurred during the administration of high‐titer FGAs 2D . With regard to reducing the dose of antipsychotics, a meta‐analysis compared ultra‐high doses (over 35 mg/d) with somewhat high doses (7.5‐15 mg/d) of haloperidol, and low doses (<400 mg/d) with moderate doses (400‐800 mg/d) and with high doses (over 800 mg/d) of chlorpromazine. In each comparison, the lower dose resulted in a significantly lower frequency of acute dystonia. In conclusion, reducing the dose of antipsychotics while carefully assessing symptoms is desirable 2D . (3) Akathisia Akathisia is a side effect characterized by restlessness of the body, such as "fidgeting of the lower limbs," "stomping feet," and "not being able to sit still." Mildly affected patients may be able to control it themselves, but caution is required because these symptoms can be accompanied by strong anxiety and frustration and can lead to suicidal ideation, suicide attempts, and other harm. Active interventions such as pharmacotherapy, psychotherapy, and environmental adjustments, including hospitalization, are recommended in such urgent cases 1D . Please refer to CQ5‐1 (⇒ pg. 94). The likelihood of akathisia depends on the dose of the antipsychotic D . Reducing the dose of the orally administered antipsychotic after sufficient discussion with the patient is recommended when akathisia symptoms are mild 1D . A double‐blind trial in 119 patients with young‐onset schizophrenia showed that SGAs (2.5‐20 mg/d of olanzapine, 0.5‐6 mg/d of risperidone) had a lower incidence of akathisia than an FGA (10‐140 mg/d of molindone) C . A clinical trial that compared the mean difference in DIEPSS between olanzapine and haloperidol groups in 182 Japanese patients showed that olanzapine caused significantly less akathisia than haloperidol C . A systematic review and a meta‐analysis also showed that the onset of akathisia was lower with SGAs than with FGAs C , but caution is required because the FGA comparators in these trials were often high doses of high‐titer haloperidol. Meanwhile, the rate of concomitant antiparkinsonian drug use was higher with moderate doses of perphenazine (160 patients, 8‐32 mg/d), a medium‐titer FGA, in patients with chronic schizophrenia than in the SGA groups receiving olanzapine (174 patients, 7.5‐30 mg/d), quetiapine (166 patients, 200‐800 mg/d), and risperidone (167 patients, 1.5‐6 mg/d); however, there were no differences in the incidence of akathisia D .
In one RCT in which only the doses were blinded for up to 12 weeks, the onset of akathisia was compared between 118 patients on FGAs (most commonly sulpiride, 58 patients, average dose 813 [200‐2400] mg/d) and 109 patients on SGAs (50 patients on olanzapine, average 15 [5‐30] mg/d; 23 on quetiapine, average 450 [200‐750] mg/d; 22 on risperidone, average 5 [2‐10] mg/d; etc.). Eight patients in the FGA group and 4 patients in the SGA group had akathisia, with an odds ratio of 0.4 (95% CI = 0.1‐1.6) and P = 0.18; SGAs showed a favorable tendency but no significant difference D . In conclusion, switching to SGAs is recommended when high‐titer/high‐dose FGAs are prescribed 1C . Furthermore, it is desirable to use medium‐titer or low‐titer FGAs when switching to SGAs is not possible 2D . For anticholinergic drugs, there was one very small‐scale double‐blind comparative study, but it showed no significant difference from IM placebo; if anything, cognitive impairment due to the anticholinergic drugs was indicated. Beta‐blockers such as propranolol (80 mg/d) have been shown to be effective, but the trial was very small D , the effects were not seen within 48 hours, and propranolol was effective only when used concomitantly with anticholinergic drugs D . Furthermore, assessments were not consistent even in a systematic review of three RCTs (total n = 51) D , and the risks of side effects such as beta‐blocker‐induced hypotension or bradycardia must be considered D . Two small‐scale RCTs have shown the effectiveness of clonazepam D . Among antidepressants, mianserin (15 mg/d) D , mirtazapine (15 mg/d) D , trazodone (100 mg/d) D , cyproheptadine D , and vitamin B6 (both 600 mg/d and 1200 mg/d) D have been shown to be effective. However, caution is required because these were all results of small‐scale double‐blind placebo‐controlled trials, and the use of these drugs constitutes off‐label use in Japan. In conclusion, concomitant use of anticholinergic drugs, beta‐blockers (propranolol), clonazepam, mianserin, mirtazapine, trazodone, cyproheptadine, and vitamin B6 is not desirable 2D . (4) Tardive dyskinesia TD refers to various involuntary movements around the neck, face, and mouth (pursed lips, tongue movement, and lip movement) and irregular movements of the upper and lower limbs occurring a few months after starting antipsychotics. These can be irreversible, and there is no established treatment method. Reducing the dose of antipsychotics was reported to be effective in a very small‐scale trial of eight patients: TD did not develop in the four patients of the reduced‐dose group, whereas two of the four patients in the normal‐dose group developed TD D . However, a Cochrane Review meta‐analysis concluded that there were issues with the study design and that effectiveness could not be determined D . In conclusion, there is no evidence that reducing the dose of antipsychotics prevents TD, and it is desirable not to do so 2D . Switching to clozapine, olanzapine, or quetiapine has been suggested to have an effect after the onset of TD. An extremely small‐scale non‐blind trial in seven subjects with severe TD showed improvements in TD with clozapine D . Switching to olanzapine in 92 patients with moderate or more severe TD resulted in ~70% of subjects no longer being diagnosed with TD after eight months D . Two small‐scale trials examined the effects of quetiapine on TD.
A 12‐month single‐blind trial in which 45 patients with TD were randomly allocated to quetiapine (400 mg/d, 22 patients) or haloperidol (8.5 mg/d, 23 patients) showed significant improvement in the TD score of the Extrapyramidal Symptom Rating Scale (ESRS) D . A small‐scale trial that compared switching to quetiapine (13 patients) with continued treatment (nine patients) in patients with TD also showed that switching to quetiapine decreased TD D . Based on these results, switching to clozapine, olanzapine, or quetiapine is desirable, since switching may reduce TD 2D . A small‐scale RCT showed that TD severity decreased when the dose of anticholinergic drugs was reduced D , and dose reduction is desirable when anticholinergic drugs are used concomitantly 2D . One RCT that compared the mean difference in the total Abnormal Involuntary Movement Scale (AIMS) score between 78 patients taking Ginkgo biloba extract and 79 patients taking placebo showed the treatment to be effective D . One RCT that compared the mean difference in the ESRS TD score between 21 patients taking piracetam and 19 patients taking placebo also showed the treatment to be effective D . However, both would constitute off‐label use in Japan, and their concomitant use is not desirable 2D . (5) Tardive dystonia Tardive dystonia refers to postural or behavioral abnormalities due to persistent, involuntary myotonia occurring several months after starting antipsychotic drugs. Patients may no longer be able to maintain posture or make smooth movements as intended, which can cause severe difficulties in activities of daily living, including walking. Extremely small‐scale open trials of 7 and 5 patients showed that switching to clozapine had an effect D . Considering the option of introducing clozapine, while weighing its drawbacks including side effects and the need for hematological monitoring, is desirable 2D . (B) Prevention of EPS (1) Prevention of drug‐induced Parkinsonism Many studies have shown that clozapine, olanzapine, quetiapine, aripiprazole, perospirone, risperidone, blonanserin, and paliperidone result in fewer EPS than haloperidol A , and selecting SGAs over FGAs is recommended 1A . As for comparisons between SGAs, an RCT‐based meta‐analysis that directly compared the rates of concomitant antiparkinsonian drug use indicated that there are differences in these rates even among SGAs D . Therefore, in cases with a history of drug‐induced Parkinsonism or a suspected underlying illness, it is desirable to select clozapine, quetiapine, or olanzapine while considering other symptom profiles 2D . (2) Prevention of acute dystonia A retrospective cohort study of 1975 patients in the United States from 1997 to 2006 showed that the odds of acute dystonia were significantly lower in patients given SGA monotherapy than in patients given FGA monotherapy (odds ratio = 0.12, 95% CI = 0.08‐0.19) C . A prospective cohort study of 1337 subjects hospitalized in a psychiatric emergency unit showed that 41 of 1337 patients (3.1%) exhibited acute dystonia. The incidence by drug was as follows: FGAs, 32/561; risperidone, 4/495; olanzapine, 1/95; quetiapine, 1/15; and clozapine, 0/142. SGAs showed a significantly lower rate of onset than FGAs (P < 0.001) C .
In conclusion, SGAs are recommended over FGAs when selecting antipsychotics for the prevention of acute dystonia 1C . When FGAs were used, patients who received anticholinergic drugs as a preventative measure (N = 9 studies, total n = 1366) had an acute dystonia incidence of 14.8%, whereas patients who received high‐titer antipsychotics (N = 6 studies, total n = 330) had an incidence of 51.2%. Based on these results, anticholinergic drugs (biperiden and trihexyphenidyl) are effective for the prevention of acute dystonia when using FGAs D , and these drugs are suggested 2D . Temporary preventative use for up to a few weeks after starting treatment is desirable 2D . (3) Prevention of akathisia Based on the studies described in (A), avoiding high‐titer/high‐dose FGAs and selecting SGAs is recommended for the prevention of akathisia 1C . Alternatively, it is desirable to use medium‐titer or low‐titer FGAs when SGAs cannot be used 2D . A meta‐analysis that compared SGAs analyzed mean differences in the Barnes Akathisia Scale (BAS). Among aripiprazole, clozapine, olanzapine, quetiapine, and risperidone, a direct comparison of the mean difference in BAS between aripiprazole and olanzapine (N = 3 studies, total n = 1862) showed that akathisia was lower with aripiprazole than with olanzapine. However, this difference was small (MD 0.10, 95% CI: 0.01‐0.19, P = 0.04), and no significant differences were seen in the other pairwise comparisons between SGAs B . Therefore, no recommendation on a specific drug is made with regard to choosing between SGAs. (4) Prevention of TD SGAs have been shown to be less likely to cause TD than FGAs. A 2.6‐year RCT that compared AIMS scores between olanzapine (average of 13.5 mg/d, 1192 patients) and haloperidol (average of 13.9 mg/d, 522 patients) showed a one‐year incidence of 5.1% in the former and 18.8% in the latter group; the risk ratio over the whole observation period was 3.69 (95% CI: 2.10‐6.50) C . An open‐label prospective observational study that compared SGAs (olanzapine, quetiapine, and risperidone) with an FGA (haloperidol) showed that the incidence of TD after six months was 0.9% with SGAs versus 3.8% with FGAs, with an odds ratio of 0.29 (95% CI: 0.18‐0.46), ie, a lower incidence with SGAs D . In conclusion, it is recommended that SGAs are selected over FGAs 1D . However, in one RCT lasting over one year that compared risperidone (average of 4.9 ± 1.9 mg/d, 177 patients) and haloperidol (11.7 ± 5.0 mg/d, 188 patients), the results were favorable for risperidone, with new TD in one patient (0.6%) in the risperidone group and five patients (2.7%) in the haloperidol group, but the difference was not significant. Placebo‐controlled and 20 mg/d haloperidol‐controlled studies showed improvements in TD symptoms with risperidone at doses below 6 mg/d. Therefore, caution is required because an appropriate dose setting is needed for prevention. (5) Prevention of tardive dystonia There are no studies on the prevention of tardive dystonia, including the selection of antipsychotics or the concomitant use of anticholinergic or antiparkinsonian drugs. A recent cross‐sectional, retrospective study that investigated 80 non‐elderly patients with schizophrenia who had taken SGAs for more than one year and had never taken FGAs showed that 11 of 78 patients (14.1%) exhibited tardive dystonia.
Meanwhile, the frequency of tardive dystonia with FGA administration was reported to be 15 of 716 patients (2.1%) in a study of Japanese subjects and 26 of 194 hospitalized patients (of whom 64.7% were receiving LAIs of FGAs) in a study of Dutch patients. Direct comparisons cannot be made because of differences in study design, so no clear conclusion can be drawn at this time regarding a preventative effect of SGAs on tardive dystonia, and no recommendation is made for specific drugs.
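Several figures quoted in this Explanation (incidence proportions, odds ratios, risk ratios) are simple functions of the reported event counts. As a purely illustrative sketch, the Python code below derives crude incidence proportions and a crude odds ratio with a Wald‐type 95% confidence interval from the counts quoted for the psychiatric‐emergency cohort in section (B)(2) (FGAs 32/561; SGA arms pooled to 6/747). Pooling the SGA arms is an assumption made only for this example, and the result is not a re‐analysis of, and need not match, any adjusted estimate reported in the cited studies.

from math import exp, log, sqrt

def crude_odds_ratio(events_a, n_a, events_b, n_b, z=1.96):
    """Crude odds ratio of group A vs group B with a Wald-type 95% CI."""
    a, b = events_a, n_a - events_a           # group A: events, non-events
    c, d = events_b, n_b - events_b           # group B: events, non-events
    or_ = (a / b) / (c / d)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

# Counts quoted in the text: FGAs 32/561; pooled SGAs 4+1+1+0 = 6 events
# in 495+95+15+142 = 747 patients (pooling is an assumption of this sketch).
fga_events, fga_n, sga_events, sga_n = 32, 561, 6, 747
print(f"FGA incidence: {fga_events / fga_n:.1%}")    # about 5.7%
print(f"SGA incidence: {sga_events / sga_n:.1%}")    # about 0.8%
or_, lo, hi = crude_odds_ratio(fga_events, fga_n, sga_events, sga_n)
print(f"Crude OR, FGA vs SGA: {or_:.1f} (95% CI {lo:.1f}-{hi:.1f})")

The same bookkeeping applies to the risk‐ratio figure in section (B)(4): 18.8% divided by 5.1% is approximately 3.7, consistent with the reported 3.69.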
Recommendation It is recommended to suspend antipsychotics and conduct physical treatment such as systemic monitoring and infusion 1D . Dantrolene use is recommended since it reduces the mortality rate when compared to patients who do not receive specific treatment 1D . However, caution is required as it may occasionally cause serious liver damage D . Bromocriptine use may worsen psychiatric symptoms D , but its use is recommended since it significantly reduces the mortality rate when compared to patients who do not receive specific treatment 1D . ECT does not result in any significant differences when compared to patients who do not receive specific treatment but tends to reduce the mortality rate D . It is also expected to improve psychiatric symptoms D , so it is desirable to administer ECT 2D . Explanation Neuroleptic malignant syndrome is a serious and potentially fatal side effect that presents with symptoms such as fever, muscle rigidity, autonomic disturbances, and increases in creatine kinase levels. The incidence rate is 0.01%‐3%, and risk factors include young age, being male, neurological illnesses, dehydration, iron deficiency, weakness, agitation, physical restraints, and rapid or non‐oral administration of antipsychotics. Mortality rates have decreased compared to the past with increased awareness of neuroleptic malignant syndrome and early diagnosis, but the mortality rate is still ~10%. There is no evidence from RCTs for the treatment of neuroleptic malignant syndrome since it is a rare and heterogeneous disease and because it is a life‐threatening event. If neuroleptic malignant syndrome is suspected, antipsychotics should be suspended and physical treatment such as systemic monitoring and infusion should be conducted. Simultaneously, other physical illnesses should be carefully excluded and the diagnosis should be confirmed. There are no studies that compared suspension with continuation of antipsychotics, but in many studies and in specialist‐led daily clinical practice antipsychotics are suspended first. Cases where antipsychotics are not suspended may result in death; therefore, it is recommended to suspend antipsychotics 1D . Furthermore, caution is required when antipsychotics and anticholinergic drugs are used concomitantly, since dose reduction or suspension of the anticholinergic drugs increases the possibility of neuroleptic malignant syndrome. An analysis of case series which compared dantrolene use with a group which only underwent physical treatment (n = 734) showed a mortality rate of 21% in the group which only underwent physical treatment, whereas the mortality in the dantrolene group was significantly lower at 9%‐10% D . An open‐label trial in Japan (n = 27) also showed an improvement rate of 77.8% with the use of dantrolene D . Meanwhile, liver damage has been reported as an adverse side effect of dantrolene. The incidence rate of liver damage during long‐term administration of dantrolene (n = 1044) was 1.8% and the mortality rate was 0.3% D . A report of a case series (n = 122) where liver damage occurred due to dantrolene showed that 27 out of 122 patients died. However, when the daily dose was kept below 200 mg for the treatment of neuroleptic malignant syndrome, no deaths occurred D . The possibility of cardiovascular collapse has also been indicated, so dantrolene should not be used concomitantly with calcium channel blockers.
An analysis of a case series that compared patients using bromocriptine with a group which only underwent physical treatment (n = 734) showed a mortality rate of 21% in the group which only underwent physical treatment, whereas the mortality rate in the bromocriptine group was significantly lower at 8%‐10% D . Meanwhile, worsening of psychiatric symptoms was reported as an adverse side effect of bromocriptine. A study on bromocriptine (0.5‐6 mg/d) in nine patients with chronic schizophrenia or schizoaffective disorder showed only slight worsening of psychiatric symptoms in six out of nine patients. Furthermore, a study on bromocriptine in 16 patients with chronic schizophrenia showed worsening psychiatric symptoms only in the high‐dose group (40 mg/d); no significant worsening of psychiatric symptoms was seen in the low‐dose group (5 mg/d) D . An ECT case series study (n = 784) showed that the mortality rate of the group that did not receive specific treatment was 21%, whereas the mortality rate in the ECT‐treated group (n = 29) tended to be lower at 10.3%, but this difference was not statistically significant. Furthermore, a separate case series (n = 55) conducted ECT in 40 patients for the treatment of neuroleptic malignant syndrome, 10 patients for the treatment of neuroleptic malignant syndrome and psychiatric symptoms, and 5 patients for the treatment of psychiatric symptoms during neuroleptic malignant syndrome treatment. No statistical analysis was conducted, but treatment effects were observed in ~90% of patients D . Side effects included cardiovascular events in four patients and hyperkalemia due to the use of suxamethonium at the time of ECT in one patient. Based on the results described above, ECT did not show significant differences when compared to groups which did not receive specific treatment, but it tended to reduce the mortality rate and it is expected to be effective in improving psychiatric symptoms. Therefore, it is desirable to implement ECT 2D . Other studies, including those which show the effectiveness of amantadine and BZ drugs, all had small sample sizes, and there is insufficient evidence for a conclusion. Therefore, no recommendation is made.
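To put the pooled case-series figures above into perspective, the following back-of-the-envelope sketch converts the reported mortality percentages into absolute risk reductions and approximate numbers needed to treat. The percentages are taken as reported (midpoints used where a range was given); arm-level denominators are not available, so no significance testing is attempted, and the ECT comparison was not statistically significant in the source data.

```python
# Illustrative arithmetic on the pooled case-series mortality figures quoted
# above (percentages as reported; arm-level denominators are not available,
# so no significance testing is attempted here).
baseline_mortality = 0.21          # physical treatment only

treatments = {
    "dantrolene":    0.095,        # reported 9-10%; midpoint used
    "bromocriptine": 0.09,         # reported 8-10%; midpoint used
    "ECT":           0.103,        # reported 10.3% (difference not statistically significant)
}

for name, mortality in treatments.items():
    arr = baseline_mortality - mortality   # absolute risk reduction
    nnt = 1 / arr                          # number needed to treat
    print(f"{name:13s} ARR = {arr:.1%}, NNT ≈ {nnt:.0f}")
```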
Recommendation Switching antipsychotics can be considered taking into account the risks of worsening psychiatric symptoms. Evidence on switching drug and reducing doses is as follows but switching drugs is recommended only after sufficiently examining each individual case 1D . Switching from olanzapine to risperidone, perphenazine, or aripiprazole suppresses weight gain C . However, it is desirable to consider the benefits and harm, as well as the course of treatment to date and antipsychotic drug use history when olanzapine is effective against psychiatric symptoms. Before switching the drug, it is also recommended to discuss with the patient the possibility of recurrence and relapse 2C . Switching from olanzapine to quetiapine does not have weight reduction effects while it worsens treatment continuation rate C . Therefore, it is not desirable to switch to quetiapine 2C . Reducing the dose of olanzapine is unlikely to prevent weight gain D . For this reason, reducing the dose is not desirable 2D . Metformin has been shown to have a weight reduction effect D . However, its indications on the package insert are limited. No recommendation is made (no recommendation, D ). Nizatidine D , amantadine D , and atomoxetine D have not been shown to affect weight reduction. Their use is not desirable 2D . Topiramate has a weight reduction effect D but it also has the possibility of significant side effects such as psychomotor arrest, hypersalivation, and dysesthesia D . Therefore, its use is not desirable 2D . Zonisamide has a weight reduction effect D but it may induce serious side effects such as cognitive dysfunction D . Its use is not desirable 2D . Explanation Weight gain is a common side effect of antipsychotics, particularly SGAs. It is also a risk factor for metabolic disorders and cardiovascular disease, which can worsen vital prognosis. Furthermore, disgust for their appearance may reduce patients’ adherence to antipsychotics and lead to a worsening of psychiatric symptoms. Therefore, the side effect of weight gain should be either avoided or attenuated to improve psychiatric symptoms. Associations with the antipsychotic’s histamine H1 receptor affinity and serotonin 5‐HT2C receptor affinity have been implicated in weight gain. , Furthermore, lifestyle factors that are characteristic of patients with schizophrenia, such as insufficient dietary intake limitations and insufficient exercise may have an effect on weight gain. The results of a meta‐analysis of first‐episode psychosis showed significant weight gain with SGAs compared to FGAs. Furthermore, a meta‐analysis of two FGAs (haloperidol and chlorpromazine) and 13 SGAs (clozapine, amisulpride*, olanzapine, risperidone, paliperidone, zotepine, quetiapine, aripiprazole, sertindole*, ziprasidone*, asenapine, lurasidone, and iloperidone*) revealed that all antipsychotics with the exception of haloperidol, ziprasidone*, and lurasidone* resulted in significant weight gain when compared to a placebo. Of these, olanzapine had the highest risk and induced significant weight gain compared to other drugs with the exception of zotepine, clozapine, iloperidone*, and chlorpromazine*. Drug change and reduced doses of antipsychotics Suspending the causative drug as a measure against side effects due to pharmacological therapy is the same across all treatments. 
However, suspension of drug administration in pharmacological therapy in schizophrenia can worsen psychiatric symptoms or induce recurrence, and continuing drug administration is recommended in this guideline as well (see Chapter 3). Therefore, switching the antipsychotic after sufficient consideration of the risks of worsening psychiatric symptoms can be considered as a pharmacological therapy against weight gain as a side effect due to antipsychotics. There are four RCTs that examined the effects of switching antipsychotics for weight gain. , , , Analysis of secondary data from the Clinical Antipsychotic Trial of Intervention Effectiveness (CATIE) showed that patients who switched from olanzapine to another drug (risperidone, quetiapine, perphenazine, or ziprasidone*) (n = 224) had significantly less weight gain compared with patients in the olanzapine continuation group (n = 73). Meanwhile, no significant differences between the two groups were seen regarding changes in psychiatric symptoms C . Studies on risperidone and quetiapine also investigated changes in weight due to continuing drugs or switching to other drugs but there were no significant differences between the continuation group and the group that switched the drug C . A study on subjects with a body mass index (BMI) over 27 who were taking olanzapine (n = 173), which compared between an olanzapine continuation group and an aripiprazole drug change group showed significant decreases in weight after 16 weeks in the aripiprazole drug change group. Meanwhile, the Clinical Global Impression‐Improvement scale (CGI‐I) did not worsen for either group but significant improvements were seen in the olanzapine continuation group C . Based on these studies, it is expected that switching from olanzapine to risperidone, perphenazine, or aripiprazole would suppress weight gain C . However, the benefits and harm, the course of treatment to date, and antipsychotic drug use history when olanzapine is effective against psychiatric symptoms must be considered as well. Switching to another drug should be decided after discussing the possibility of recurrence and relapse with the patient 2C . A study on subjects with a BMI over 25 who were taking olanzapine (n = 133) that compared between an olanzapine continuation group and a quetiapine drug change group showed no significant differences in weight loss. Furthermore, the olanzapine continuation group had a significantly higher treatment continuation rate C . A study on subjects whose weight increased by more than 5 kg while taking olanzapine (n = 149) that compared between an oral olanzapine tablet group and an oral disintegrating olanzapine tablet group showed no significant differences in body weight change between the two groups C . There are no RCTs that directly examined the effects of reduced doses of antipsychotics against weight gain. However, a study which investigated olanzapine prescription amounts and changes in body weight (n = 573) showed that weight gain occurred regardless of the amount of olanzapine prescribed (5 ± 2.5, 10 ± 2.5, 15 ± 2.5, >17.5 mg/d) and found no significant differences between the different prescription amount groups. From these results, it is predicted that improvement effects for weight gain cannot be expected even with reduced doses of olanzapine D . Pharmacological therapy intervention Assessments for intervention studies for pharmacological therapy against antipsychotic‐induced weight gain are limited to drugs that have been approved in Japan. 
There are 11 intervention studies with metformin, which is an oral biguanide hypoglycemic agent. Two of these studies were excluded from the assessment due to particular characteristics of the control group. The two studies by Baptista et al. both compared metformin with a placebo group (N = 2, total n = 55) and showed that the metformin intervention group (N = 2, total n = 54) had no significant changes in weight or BMI, whereas the remaining seven studies all showed that the metformin intervention group (N = 7, total n = 287) had significant weight loss or suppressed weight gain compared to the placebo group (N = 7, total n = 290). The results of three meta‐analyses each showed significant weight loss in the metformin intervention group D . There were also no significant side effects observed, including worsening of psychiatric symptoms D . However, the indications on the package insert in Japan are “type‐2 diabetes where sufficient effects cannot be obtained with diet/exercise therapy or the use of sulfonylureas,” and no recommendation is made in this guideline. There were four intervention studies with nizatidine, which is a histamine H2 receptor blocker. One of these four studies found significant weight loss in the nizatidine intervention group (N = 1, n = 17) compared to the placebo group (N = 1, n = 17), whereas three studies found no significant differences between the nizatidine intervention group (N = 3, n = 108) and the placebo group (N = 3, total n = 76). The results of two meta‐analyses also showed no significant differences. In conclusion, it is desirable not to use nizatidine 2D . There are two intervention studies for amantadine, which is an antiparkinsonian drug. Both studies involved concomitant use of olanzapine, but the results from the study with the larger sample size and longer observation period and from two meta‐analyses showed no significant differences between the amantadine intervention group and the placebo group. Based on these studies, it is desirable not to use amantadine 2D . There is one intervention study for atomoxetine, which is a selective noradrenaline re‐uptake inhibitor. No significant differences were seen between the atomoxetine intervention group (n = 20) and the placebo group (n = 17). In conclusion, it is desirable not to use atomoxetine 2D . There are three intervention studies for topiramate, which is an antiepileptic drug. Significant weight loss was seen in the topiramate intervention group (N = 3, total n = 85) compared to the placebo group (N = 3, total n = 72). The results of three meta‐analyses similarly showed significant weight loss in the topiramate intervention group. However, the topiramate intervention group had significantly more side effects such as psychomotor arrest, hypersalivation, and dysesthesia. Therefore, it is desirable not to use topiramate 2D . There is one intervention study for the antiepileptic drug zonisamide. Weight gain was significantly suppressed in the zonisamide intervention group (n = 11) compared to the placebo group (n = 12), but cognitive dysfunction was observed significantly more often in the former group as a side effect. It is desirable not to use zonisamide, given the appearance of cognitive dysfunction as a side effect 2D .
Spontaneous coronary and vertebral artery dissection in early pregnancy

Spontaneous coronary artery dissection (SCAD) is the most common cause of myocardial infarction in pregnancy. Affected pregnant women are thought to have an underlying predisposition, with the acute dissection being precipitated by hormonal changes weakening the vessel wall and haemodynamic changes increasing the vessel wall stress. Although SCAD is more common in the third trimester of pregnancy and the postpartum period, it can occur in early pregnancy. When this occurs, patients should be counselled about the risk of maternal and fetal morbidity posed by the ongoing pregnancy, and shared decision-making between the patient and a multidisciplinary team should guide clinical care. A pregnant nulliparous female aged in her early 30s initially presented to the hospital at 11 weeks’ gestation with acute onset chest pain. Her medical history was significant for anxiety, treated with sertraline 50 mg daily, and endometriosis requiring previous micro-ablation surgery. At the time of her presentation, her only other medications were an antenatal multivitamin and progesterone pessaries. She had no allergies. Vaccinations for influenza and COVID-19 were up to date. She worked full time as a primary school teacher. She was a life-long non-smoker, did not use recreational drugs and consumed alcohol socially but not during pregnancy. Family history revealed a sister who experienced a vertebral artery dissection 6 months postpartum, although this was in the context of a motor vehicle accident and possible neck manipulation. Two other sisters were well with no history of cardiovascular events. A maternal grandmother died postpartum, reportedly due to pneumonia. There was no other family history of premature death or cardiovascular events. On presentation to an emergency department without access to obstetrical services, she reported sudden onset burning-type chest pain, arm pain, headache and clamminess. There were non-specific but dynamic ECG changes and high-sensitivity troponin peaked at 387 ng/L (normal range <11 ng/L). A transthoracic echocardiogram (TTE) was normal, as was a CT pulmonary angiogram performed to exclude a pulmonary embolism. Her symptoms resolved, and she was discharged from the emergency department with a plan for outpatient cardiology follow-up. Four days later, she re-presented to a different emergency department, again without access to obstetrical services, with chest and arm pain. ECG changes were consistent with a posteroinferior ST elevation myocardial infarction, and troponin peaked at 32 744 ng/L. A diagnosis of SCAD was made based on angiogram findings of an occluded right coronary artery with evidence of dissection. Balloon angioplasty (no stent) was performed with no immediate complications. CT brain was unremarkable. Vertebral ultrasound showed likely right-sided vertebral artery dissection. This was confirmed on vertebral artery CT as a >50% narrowing of the mid to distal right vertebral artery and on magnetic resonance angiography showing intramural haematoma from dissection extending from the level of the inferior endplate of C4 to the skull base/the fourth segment of the right vertebral artery. The vertebral dissection was managed conservatively.
Four days later, recurrent chest pain and acute ECG changes required emergency cardiac catheterisation, which confirmed re-occlusion of the vessel by coronary artery dissection. Because the dissection extended to the ostium of the right coronary artery, stent implantation was not attempted due to the risk of dissecting the ascending aorta. A coronary artery bypass graft (CABG) to the right coronary artery was performed. Life-long aspirin (100 mg daily) was commenced. Metoprolol 12.5 mg twice daily was commenced to reduce the risk of subsequent dissection and assist left ventricular recovery, with up-titration recommended as blood pressure allowed. Post-CABG, TTE was reported as showing mild-to-moderate left ventricular dysfunction with an estimated ejection fraction of 40%. The basal to mid inferolateral, basal inferoseptal and basal to mid inferior walls were akinetic. Renal artery ultrasound showed normal renal arteries without aneurysmal dilatation. Repeat CT and vertebral artery ultrasound were arranged for 6 weeks after the initial presentation. An urgent telehealth consultation occurred with a cardiologist managing a tertiary cardio-obstetrical service while she was recovering in intensive care. She was discharged to community cardiac rehabilitation, and cardio-obstetrical care was arranged in a tertiary maternity hospital. At 13 weeks’ gestation, she had her first joint cardiology and obstetrical appointment. Extensive counselling about the risks of ongoing pregnancy was provided to the patient, her partner and her extended family. She elected to continue the pregnancy. Maternal genetic testing for inherited aortopathies and related conditions was undertaken to guide further counselling and management. Non-invasive prenatal testing was deemed not necessary by the couple as it would not change their management of the pregnancy. All other standard antenatal testing was unremarkable. Six weeks post-CABG, at 19 weeks’ gestation, the repeat TTE showed a mildly dilated left ventricle with moderate systolic dysfunction. The basal and mid posterior walls were severely hypokinetic. The ejection fraction was 40%. At 20+5 weeks’ gestation, the standard morphology ultrasound showed a normal-appearing fetus. A single umbilical artery was noted. Frequent review at both a cardio-obstetrics clinic and a high-risk maternal fetal medicine clinic occurred throughout the pregnancy. At 24+5 weeks’ gestation, obstetrical ultrasound performed to assess fetal growth showed a well-grown fetus (estimated fetal weight (EFW) 46th centile, abdominal circumference (AC) 92nd centile). However, when repeated at 28+4 weeks’ gestation, the ultrasound showed a steady fetal growth trajectory (EFW 58th centile, AC 95th centile), but the fetal bowel loops were dilated (18 mm; upper limit of normal (ULN) 15 mm), the stomach was dilated and the anal muscular complex was visible, excluding anorectal atresia. At 32+5 weeks’ gestation, obstetrical ultrasound showed dilated bowel loops at 30 mm diameter, suggestive of small bowel atresia. The amniotic fluid index was borderline high at 25 (normal range 5–25), but there was no other evidence of fetal compromise. Neonatology and genetic opinions were sought, and the appearance was thought to suggest jejunal atresia, likely related to a vascular event affecting the small bowel blood supply. After multidisciplinary discussion, delivery via Caesarean section at 34 weeks’ gestation was planned.
This was thought to represent a compromise between the need to minimise cardiac stress on the mother, acknowledging the possibility of an unrecognised genetic predisposition given the early and severe presentation, and the need to avoid excessive prematurity for the neonate. Head-to-pelvis non-late gadolinium enhancement magnetic resonance angiography was performed to exclude occult aneurysms, and none were observed. At 33+4 weeks’ gestation, the patient presented to the maternity hospital with decreased fetal movement. Cardiotocography showed prolonged decelerations (fetal heart rate 80 beats per minute). Emergency Caesarean section resulted in the birth of a live male infant with Apgar scores of 6 at 1 min, 6 at 5 min and 10 at 10 min. The birth weight was 2154 g. Common differentials for chest pain in early pregnancy include musculoskeletal pain, gastro-oesophageal reflux disease and pneumonia. However, life-threatening conditions should always be considered, including pulmonary embolism, aortic dissection and acute coronary syndromes due to atherosclerotic disease or coronary artery dissection. Coronary artery dissection can be spontaneous (SCAD) or, less commonly, be induced by physical trauma. SCAD infrequently occurs in the context of an underlying connective tissue disorder or inherited aortopathy, but this may be considered if there is a family history of these conditions, syndromic examination findings, dissections at multiple sites or recurrent dissections. Although SCAD is more common in pregnancy, in this case the timing of the events (ie, early pregnancy), the multiple vascular beds involved and the possible family history in the sister were suspicious for an underlying genetic/inherited disorder. Due to the strong association of fibromuscular dysplasia with SCAD, this was thought to be the most likely underlying cause of the presentation. An inherited aortopathy or connective tissue disorder such as vascular Ehlers-Danlos syndrome (COL3A1 mutation) or Loeys-Dietz syndrome (TGFBR1, TGFBR2, SMAD3 mutation) was also considered. This patient elected not to breastfeed, considering both the considerable metabolic demands of lactation and the benefit of additional pharmacotherapy in her recovery. Four days postpartum, the maternal TTE showed moderate segmental left ventricular dysfunction with an ejection fraction of 39%. She was commenced on candesartan 2 mg daily. Contraception with a progesterone-impregnated intrauterine device was strongly advised but, due to patient preference, was not placed. Despite being aware of the suboptimal effectiveness of barrier contraception, the patient elected to use condoms for contraception. At 6 months postpartum, the patient was participating in moderate-intensity exercise for at least 30 min per day and had no symptoms of heart failure. Stress TTE showed a moderately dilated left ventricle with moderately reduced systolic function. There was no contractile reserve, and indeed, left ventricular systolic function deteriorated with exercise. Metoprolol was changed to bisoprolol 2.5 mg daily. Candesartan was changed to sacubitril/valsartan 24/26 mg. Dapagliflozin 10 mg daily was started shortly thereafter. The infant was transferred to the neonatal intensive care unit. At 5 days of age, the infant had a laparotomy for 12 jejunal atresias with 9 primary anastomoses. Histopathology showed multiple sections of benign small bowel with central stenosis, with one section showing a collagenous fibrous band which appeared ischaemic.
A total of 47 cm of bowel remained in situ. At 6 months of age, the male infant was thriving with a height and weight tracking on the fifth centile. No developmental concerns have been identified. The mother and child dyad was reviewed by the genetics team postpartum. Previously organised genetic testing, including an aortopathy panel (exome) and microarray, had both been normal. All first-degree relatives were advised to have a TTE, with repeated studies at 3- to 5-year intervals if normal. For the offspring of this pregnancy, screening was advised to start in adolescence. Preconception counselling for future pregnancies was discussed, including the potential risk of recurrent SCAD, myocardial infarction, heart failure and death. The safest option was of no further pregnancies, and this is the likely decision of the patient. However, future obstetrical management decisions would involve shared decision-making between the patient, her family and a broad multidisciplinary team. SCAD is an important cause of acute coronary syndrome and myocardial infarction, particularly among young women with few conventional cardiovascular risk factors. Pregnancy-associated SCAD is the most common cause of myocardial infarction in pregnant women and affects around 1.81/100 000 pregnancies. Pregnancy-associated SCAD usually occurs in the third trimester or postpartum period due to increased blood volume and hormonal changes impacting on the structure and tone of the blood vessels. SCAD in early pregnancy is exceedingly rare, with only a small number of case reports in the literature. This is the first case report of multiple vessel dissection in early pregnancy and the first case accompanied by morbidity in the offspring, which could potentially relate to either the vascular event in the mother or an underlying, and yet to be identified, genetic pathology. Known risk factors for pregnancy-associated SCAD are black race, chronic hypertension, lipid abnormalities, chronic depression, migraine, treatment for infertility, multiparity and advanced age at first pregnancy. Notably, the case described had none of these risk factors. Fibromuscular dysplasia is the condition most associated with SCAD. Inherited arteriopathies and connective tissue disorders are infrequently reported as the underlying cause of SCAD (<5% of cases). Outside these known genetic conditions, SCAD does not seem to be strongly familial, with a family history being reported in 1.2% of cases. Despite this, it is likely that genetic factors predispose to SCAD, but the rarity and sporadic nature of the condition make it difficult to understand the gene-environment associations. Multiple studies suggest that pregnancy-associated SCAD has a poorer prognosis than SCAD that is unrelated to pregnancy. In a review of 120 cases of pregnancy-associated SCAD, 76% of women presented with ST-segment elevation myocardial infarction. Maternal complications included cardiogenic shock (24%), ventricular fibrillation requiring defibrillation (16%), mechanical support (28%) and in-hospital mortality (4%). It is unclear why pregnancy-associated SCAD is associated with more aggressive and extensive dissections than non–pregnancy-associated SCAD. SCAD should be suspected in young women presenting with cardiac symptoms including chest pain with radiation to the limbs or neck or nausea and vomiting and with few conventional cardiovascular risk factors. ECG may demonstrate an acute myocardial infarction, and cardiac enzymes will be raised. 
Presentation as an acute coronary syndrome or infarct leads to angiography and diagnosis. Angiography in these cases has a greater risk for iatrogenic catheter-induced dissection of 3.4%, compared with <0.2% in other routine cases. Conservative management is appropriate in most cases, with healing frequently being observed by 1 month. However, urgent intervention may be required in some cases. Myocardial infarction may occur in 5–10% of conservatively managed patients within 7 days of the first presentation. Most of these patients require emergency revascularisation. After initial management, low-dose aspirin is safe to use during pregnancy and breastfeeding and is recommended. Beta-blockers should be considered in those with left ventricular dysfunction, arrhythmias or hypertension. Beta-blockers should also be considered if there is a risk factor for dissection, such as an ongoing pregnancy or a predisposing genetic diagnosis. All patients who have a myocardial infarction caused by SCAD should be referred to cardiac rehabilitation. Special attention should be given to mental health as anxiety and depression are common after SCAD, particularly when it occurs in pregnancy or the postpartum period. The 2018 American Heart Association SCAD Scientific Consensus Statement recommends that women with pregnancy-related SCAD should be advised against subsequent pregnancy due to the high risk of recurrence and the inability to predict recurrence and severity, which limits prevention strategies. For women who wish to pursue future pregnancy and where adoption and surrogacy are not acceptable, women are advised to wait at least 1 year after myocardial infarction before proceeding with pregnancy and ideally to have recovered ventricular systolic function without residual cardiopulmonary symptoms. During pregnancy, multidisciplinary care, continuation of a beta-blocker and rigorous management of hypertension are advised. Reassuringly, Tweet et al reported that most women who did pursue pregnancy after SCAD did not have a recurrence of SCAD in the subsequent pregnancy. Patient’s perspective At 11 weeks pregnant, I boarded a plane and was suddenly struck by what felt like intense acid reflux, uncontrollable coughing, clamminess and throbbing arm pain. After being taken to hospital by ambulance, I had numerous scans and tests that revealed an elevated troponin level. Doctors couldn’t determine the cause. I was in hospital for 3 days then was cleared to return to work. Two days later, I experienced a throbbing pain on the right side of my neck and head. After a few hours that pain stopped abruptly and was quickly followed by another heart episode, with similar symptoms. As a result, I had an angiogram and was diagnosed with SCAD. I also had a CT scan and ultrasound which diagnosed a vertebral artery dissection. While recovering in hospital, 4 days later I had another heart episode. This resulted in a second angiogram, then open-heart bypass surgery. Following my surgery, I was in excruciating pain and heavily medicated, leading to hallucinations. Doctors informed us that there were no statistics to guide us because my SCAD happened so early in the pregnancy. If I proceeded with the pregnancy I could possibly die or have a stroke, but the likelihood of this was unknown. We faced total uncertainty and felt desolate. Fortunately, my family advocated for me during my vulnerable state and encouraged me to wait a week when I was to be discharged before making any decisions. 
After resting at home, I was able to make an informed choice which I could not have done while heavily medicated in the hospital. I wanted to give my baby a chance at life. Cardiac rehabilitation was vital to my recovery. Instead of group classes, I received personalised sessions with a physiotherapist, which was essential given the additional stress of a pregnancy-related heart condition. Regular sessions with my psychologist helped me manage the trauma and ongoing emotional, mental and physical challenges. I lent heavily on my husband and our parents who offered constant support. I made significant lifestyle changes to reduce stress, including stopping work and minimising social engagements. After discharge from hospital, my husband and I moved in with my parents for support. Following medical advice, at 22 weeks, we moved into accommodation very close to the hospital. I meticulously followed the advice from the Maternal Fetal Medicine Clinic and the Cardiology Pregnancy Clinic. I was reassured by the constant monitoring, planning and multidisciplinary conversations that took place to care for my baby and me. Following the birth I have continued with cardiac rehabilitation and taking medication. My son and I are thriving. I am so grateful to be here to enjoy him! Learning points A pregnant or postpartum woman who presents with symptoms of myocardial ischaemia in the absence of cardiovascular risk factors should be considered as having spontaneous coronary artery dissection (SCAD) until proven otherwise. If SCAD occurs early in pregnancy, the patient should be counselled about the potential maternal and fetal morbidities associated with the ongoing pregnancy. Genetic testing is not required in all cases of pregnancy-associated SCAD. However, genetic testing may be considered where there are features suggestive of an inherited aortopathy or connective tissue disorder such as a family history of these conditions, syndromic examination findings, dissections at multiple sites or recurrent dissections. |
Considerations for the clinical development of immuno-oncology agents in cancer
When evaluating therapeutic compounds against oncogenic vulnerabilities or cytotoxic chemotherapy, the efficacy of these agents requires evaluation using in vivo models . In this case, several models can be used, including nude mice with xenografted tumor cells, transgenic mice with a specific genomic alteration, or patient derived xenograft (PDX) models. Generally, it is considered that the effect observed in these models can mirror the potential activity detected in humans . In contrast, for immunotherapy agents, it is generally accepted that preclinical in vivo data do not translate into clinical efficacy in patients . The use of syngeneic mice models where the animal immune system is preserved has been utilized extensively, and we have seen this model incorporated in the evaluation of agents approved recently . A detailed review of models that recreate the human immune system is beyond the scope of this review and can be found in other articles . In this context, although very sophisticated models have been developed with the intent to reflect the human immune system, it is generally accepted that none of these models can predict the efficacy of the evaluated compound when tested in humans . Similarly, in vivo models do not predict safety for later human studies, therefore the US Food and Drug Administration (FDA) has decided not to make animal studies mandatory for investigational new drug (IND) applications of novel agents . This initiative, which was released recently, endorses the limited information of some of these pre-clinical models, including those to evaluate IO agents. As a consequence, models for testing efficacy in vitro like the use of tumor organoids or tissue cross reactivity studies for safety (among others), are gaining interest .
Several concepts must be taken into consideration when developing novel IO agents in cancer. For instance, from a pharmacokinetic (PK) perspective, if the target is significantly expressed in non-transformed tissue or is abundant in immune cells not located in tumor areas, a phenomenon called target-mediated drug disposition (TMDD) can be observed. This translates into a reduction of the exposure of the compound, as the agent binds first to target expressed outside tumor areas. This effect has also been termed the "sink effect" on account of the reduction of the compound in plasma. To avoid this phenomenon, more frequent administrations of the agent are needed during the first cycles to saturate target binding in non-tumor areas. TMDD is observed frequently with many IO agents, including most of the CD3 T-cell engagers, CD73 inhibitors or 4-1BB bi-specific antibodies, among others. An additional problem is the development and presence of anti-drug antibodies (ADA) against biologic or protein-based drugs. Although there are several non-clinical pharmacology methods to predict the development of ADA in humans, it is impossible to accurately predict the potential impact that ADAs will have in patients by neutralizing the new compound. Overall, complex protein structures that do not mimic human formats have higher chances of inducing ADAs. Recent examples have demonstrated how the production of ADAs can limit the development of novel agents, particularly when their presence modifies the PK exposure and therefore impacts target engagement. In this case, only the administration of doses that can saturate the capacity to produce ADAs can overcome this limitation. This requires administration of the agent at higher doses, but this can only be achieved if there is a sufficient therapeutic index, a condition not observed with all agents. Of note, agents that activate CD4+ T-cells and therefore support the humoral response can have a higher probability of inducing ADA. An accompanying table describes elements that can influence PK and therefore affect target engagement. Finally, management of side effects is particularly important for T-cell activators/engagers, where the presence of cytokine release syndrome (CRS), neurologic toxicity or infusion reactions (IR) can limit their development. In this regard, premedication with steroids, treatment with IL-6 inhibitors, step-up dosing schedules, subcutaneous administration or pre-administration of anti-CD20 antibodies have been implemented with the intent to reduce toxicity and facilitate the development of these agents. A second accompanying table describes strategies to optimize dosing and reach optimal biologically active doses while overcoming the main limitation of toxicity.
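To make the TMDD "sink effect" described above more concrete, below is a minimal, hypothetical one-compartment simulation in the spirit of the classical Mager–Jusko TMDD model. All parameter values are invented for illustration only and do not describe any specific compound; the point is simply the qualitative behaviour: at a low dose the abundant target consumes the drug and the dose-normalised exposure drops, whereas at a saturating dose exposure approaches dose proportionality.

```python
# A minimal, hypothetical target-mediated drug disposition (TMDD) sketch in the
# spirit of the Mager-Jusko model: at low doses the abundant target acts as a
# "sink" and accelerates apparent drug elimination; at higher doses the target
# saturates and exposure becomes closer to dose-proportional. Parameter values
# are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (arbitrary units)
kel, kon, koff = 0.1, 1.0, 0.05      # linear elimination, binding on/off rates
ksyn, kdeg, kint = 1.0, 0.2, 0.5     # target synthesis, turnover, complex internalisation
R0 = ksyn / kdeg                      # baseline free target

def tmdd(t, y):
    C, R, RC = y                      # free drug, free target, drug-target complex
    bind = kon * C * R - koff * RC
    dC = -kel * C - bind
    dR = ksyn - kdeg * R - bind
    dRC = bind - kint * RC
    return [dC, dR, dRC]

t_eval = np.linspace(0, 48, 200)
for dose in (1.0, 50.0):              # low vs. saturating IV bolus (concentration units)
    sol = solve_ivp(tmdd, (0, 48), [dose, R0, 0.0], t_eval=t_eval, method="LSODA")
    auc = np.trapz(sol.y[0], t_eval)  # exposure of free drug
    print(f"dose {dose:5.1f}: dose-normalised AUC = {auc / dose:.2f}")
```

The low dose yields a markedly smaller dose-normalised AUC than the saturating dose, which is the signature of TMDD and the rationale for more frequent dosing during the first cycles.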
Only two types of IO compounds have been approved for the treatment of hematologic malignancies and solid tumors: TCE and ICI. TCE are bi-specific antibodies that link CD3 or another T-cell functional receptor with a tumor-associated antigen (TAA) to induce tumor cell death by the effector immune cell. Here, differential expression of the TAA is mandatory to avoid on-target, non-tumor toxicity. Currently approved TCE are designed against well-defined TAA in selected indications: for instance, the CD3-CD19 bi-specific blinatumomab in Philadelphia chromosome-negative relapsed or refractory precursor B-cell acute lymphoblastic leukemia (R/R ALL), and in adults and children with B-cell precursor acute lymphoblastic leukemia (BCP ALL) in first or second complete remission with minimal residual disease (MRD); or, more recently, the B-cell maturation antigen (BCMA)-CD3 bi-specific teclistamab in relapsed or refractory multiple myeloma. The CD3-CD20 bi-specific mosunetuzumab has received accelerated approval for the treatment of relapsed or refractory follicular lymphoma after two or more lines of treatment. Epcoritamab, a bispecific antibody targeting CD3 and CD20, has received FDA priority review for the treatment of relapsed/refractory diffuse large B-cell lymphoma. The development of TCE can be more successful in hematologic malignancies, in which monoclonal expansion of tumor cells drives the disease and TAA are homogeneously expressed (e.g. CD19 or CD20 in B-cell lymphoma). Identification of specific TAA in solid tumors is more challenging, due to intra- and inter-tumor heterogeneity. Interestingly, however, some TAA in solid tumors are expressed in a highly specific manner, such as KLK2 in prostate cancer or LY6G6D in colorectal cancer, and these are therefore promising candidates for the development of TCE. Regarding the line of treatment and backbone partner, given that these compounds induce profound T-cell activation with significant immunologic toxicity, later lines of treatment are usually chosen for evaluation; the agents are typically administered as monotherapy and evaluated in combination only once the optimal dose, schedule, and route of administration are clearly defined. As described before, ICI such as anti-PD-(L)1 and anti-CTLA4 antibodies have been part of the therapeutic armamentarium for over a decade. More recently, the anti-LAG3 antibody relatlimab was approved in first-line melanoma. Most of these agents have demonstrated activity in late lines of treatment, particularly in patients with immune-reactive tumors. Once these agents show activity in patients pretreated with several lines of standard treatment, evaluation of efficacy in earlier lines, either alone or in combination with standard-of-care agents, is warranted. These include combinations with chemotherapy regimens in first-line gastric, esophageal, non-small cell lung cancer (NSCLC), or triple-negative breast cancer, among other tumors. Additionally, examination as monotherapy in PD-L1-enriched populations, such as in head and neck squamous cell carcinoma (HNSCC) or NSCLC, is also of interest. In line with previous data, most current early-stage clinical studies, particularly with IO agents, use a master protocol approach for early development. This involves a single protocol in which several dose-escalation parts, alone or in combination, are followed by multiple single-arm dose-expansion cohorts to identify early signs of activity.
This approach also aligns with several recent FDA requirements aimed at developing clinical strategies to better identify the dose selected for registration studies. This initiative has been called Project Optimus. Of note, options for dose optimization vary depending on the mechanism of action of the compound, its safety profile, and combination strategies, and can include evaluation of different dose levels in the dose-escalation phase using back-fill patients, or the selection of two expansion cohorts with different dose levels. Dose-optimization studies should be performed at biologically active doses where activity has been identified, and in indications with the potential to detect clinical efficacy. The method for dose escalation is also relevant. Despite the availability of modern Bayesian designs for dose escalation, some studies still use the 3 + 3 design. This poses a significant problem for IO agents with stochastic toxicities that can appear at dose levels previously thought to be safe, and at times that can exceed the period of observation for dose-limiting toxicity. Protocols with a 3 + 3 design that do not take into account dose-limiting toxicities during the PK-PD expansion can result in challenges in dose selection. The Bayesian Optimal Interval design (BOIN) and the modified toxicity probability interval (mTPI) design are examples of dose-escalation methods more appropriate for these studies. Finally, the specific pattern of response to IO agents must be taken into consideration, particularly the expected changes in cross-sectional imaging. Modification of the response evaluation criteria in solid tumors to account for response patterns that differ from those seen with classic chemotherapy drugs (iRECIST) is of interest, but currently these criteria are not considered by regulatory bodies for drug approvals.
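To make the contrast with the rigid 3 + 3 rule more concrete, the short sketch below computes the escalation and de-escalation boundaries of the BOIN design from the closed-form expressions published by Liu and Yuan (2015). It is an illustrative sketch rather than part of the original article; the 30% target toxicity rate and the default interval bounds (0.6 and 1.4 times the target) are assumptions, and real trials would rely on validated statistical software.

```python
import math

def boin_boundaries(target=0.30, phi1=None, phi2=None):
    """Closed-form BOIN escalation/de-escalation boundaries (Liu & Yuan, 2015).

    target : targeted dose-limiting toxicity (DLT) rate, e.g. 0.30
    phi1   : highest DLT rate considered sub-therapeutic (default 0.6 * target)
    phi2   : lowest DLT rate considered overly toxic (default 1.4 * target)
    """
    phi1 = 0.6 * target if phi1 is None else phi1
    phi2 = 1.4 * target if phi2 is None else phi2
    lam_e = math.log((1 - phi1) / (1 - target)) / math.log(
        (target * (1 - phi1)) / (phi1 * (1 - target)))
    lam_d = math.log((1 - target) / (1 - phi2)) / math.log(
        (phi2 * (1 - target)) / (target * (1 - phi2)))
    return lam_e, lam_d

lam_e, lam_d = boin_boundaries(target=0.30)
# Decision rule at the current dose: escalate if the observed DLT rate
# (n_DLT / n_treated) <= lam_e, de-escalate if >= lam_d, otherwise stay.
print(f"escalate if rate <= {lam_e:.3f}, de-escalate if rate >= {lam_d:.3f}")
```

With a 30% target, these boundaries evaluate to approximately 0.236 and 0.358, so, for example, 2 dose-limiting toxicities among 6 patients (a rate of 0.33) would keep the trial at the current dose rather than force the automatic de-escalation that a 3 + 3 rule would impose.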
Randomized clinical trials with a time-to-event endpoint such as overall survival (OS), or surrogates thereof such as progression-free survival (PFS), have been the gold standard for the approval of novel anti-cancer agents. More recently, in indications that constitute an unmet medical need or have low prevalence, single-arm phase 2 studies have been used to demonstrate clinical activity and support accelerated regulatory approval. Typically, in these cases the selected endpoint has been overall response rate (ORR) and/or median duration of response (mDoR), and regulatory bodies have granted conditional approval only when the observed benefit was considered substantial, especially in clinical scenarios where no active treatment was available. Of note, this approach is not specific to immunotherapy. Recently, exceptional pathological complete responses (pCR) in specific tumor types have been considered adequate for regulatory drug approval. However, in most of these situations a time-to-event endpoint such as PFS or OS was required for conversion to full approval, thereby requiring the completion of a phase III post-registration study comparing the new agent against the standard of care (SOC). Examples abound in solid and hematologic malignancies, for instance the full approval of pembrolizumab in MSI-H colorectal cancer. In most cases, the beneficial effect was confirmed in definitive phase 3 studies, but in some cases benefit could not be confirmed, and this resulted in withdrawal of the approval of that agent for that indication. Examples of withdrawal include pembrolizumab (Keynote-604) and nivolumab (CheckMate 451 and 331) in extensive-stage small cell lung cancer (SCLC), which did not reach OS benefit in the corresponding phase III studies.
In early clinical studies, once the biologically active dose has been identified, several expansion cohorts in specific tumor indications are initiated with the aim of finding signs of clinical activity. Expansion cohorts designed to explore signals of activity should include a well-defined patient population and enroll enough patients to be powered to detect activity. If an optimal study design is not followed, there is a risk that the potential benefit of the compound will be diluted because the most responsive population will not be included. It has been established that, in a population of patients pretreated with anti-PD-(L)1 therapies, response rates higher than 20% upon rechallenge with an anti-PD1 or other IO agent can be considered meaningful, taking into consideration that single-agent activity of anti-PD-(L)1 antibodies produces responses in fewer than 10-15% of patients. Therefore, it is generally accepted that a 20% ORR compared with SOC historical controls is the minimum necessary to consider the new agent as having potential for further clinical development. Recently, several examples have met this threshold. For instance, the anti-NKG2A antibody monalizumab demonstrated an ORR of more than 20% in combination with cetuximab as second-line treatment in PD1-pretreated HNSCC patients. Similarly, the ILT4 inhibitor MK-4830 showed an ORR of more than 20% in PD1-pretreated patients across different solid tumors. Other examples include the anti-TIGIT antibody tiragolumab, with significant activity in a specific expansion cohort of NSCLC patients (50% ORR and 80% disease control rate), and the anti-LAG3 antibody relatlimab, which demonstrated clinical activity in later treatment lines in melanoma before being explored in first line. Very recently, an Fc-enhanced anti-CTLA4 antibody has demonstrated a very high rate of responses in tumors with relatively low immune reactivity, including ovarian cancer, sarcoma, and microsatellite-stable colorectal cancer. In this case, responses were higher than 30% in a heavily pretreated population in which immunotherapy had never demonstrated clinical efficacy. In addition, activity has also been observed with anti-CD47 antibodies, particularly in myelodysplastic syndrome (MDS). An in-depth description of these studies is outside the scope of this review. However, in all these cases, the observed data support the further evaluation of these agents in more definitive trials.
Once signs of clinical activity have been identified in early clinical studies, a late-stage clinical development plan is necessary. Either a randomized phase II study to confirm activity, or a phase II-III study with registration purposes, can be designed. The anti-TIGIT antibody tiragolumab demonstrated significant clinical activity in a randomized phase II study in first-line PD-L1-positive NSCLC in combination with atezolizumab versus atezolizumab alone. The combination showed a median PFS of 5.4 months versus 3.6 months in the placebo plus atezolizumab group. These data supported the development of a registration phase III study in first-line NSCLC with two co-primary endpoints, PFS and OS. Data for PFS and OS are expected to be released next year, although the first interim analysis of PFS did not reach the defined threshold of activity. Similarly, negative results have been reported in combination with chemotherapy in first-line extensive-stage small cell lung cancer (SCLC). A different approach was taken for the development of the anti-LAG3 antibody relatlimab, for which a combined phase II-III registration study was designed in first-line melanoma with a predefined futility analysis for activity in the phase II part. Of note, some drugs have been explored in the early-stage/curable setting before demonstrating activity in metastatic/palliative patients. This can be due to strategic reasons from sponsors or may be guided by biological principles. In the field of small molecules, only neratinib has received approval in the adjuvant setting before demonstration of benefit in the advanced stage. In the IO space, the anti-NKG2A antibody monalizumab and the anti-CD73 antibody oleclumab have been evaluated in combination with durvalumab after chemoradiotherapy in stage III locally advanced NSCLC. For both combinations, an increase in ORR was observed compared with durvalumab alone after chemoradiotherapy, thereby supporting the current evaluation in larger phase III registration studies.
For a robust anti-tumor immune response, the patient's existing immune system plays a central role, and modulation of the target outside tumor areas is key. This requisite is in addition to the presence of an immunoreactive tumor with high expression of targets such as PD-L1, TIGIT, or LAG3. For TCE therapies, recent data suggest the importance of pretreatment T-cell density, with an important role of CD8+ T-cells and a negative implication of CD4+ T-cells or the presence of exhausted-like CD8+ T-cells. Identification of biomarkers in liquid biopsy using circulating tumor DNA (ctDNA) has been used for stratification of risk, and therefore of potential response to anti-PD-(L)1 therapies, for example in locally advanced bladder cancer. However, this is just an indirect measure of tumor burden and not a direct evaluation of target engagement or a correlate of the activated immune system. In line with this, inflammation is directly linked with a dysfunctional immune response. A high pre-treatment neutrophil-to-lymphocyte ratio (NLR) is an indirect measure of inflammation and can predict a detrimental response to ICI. Furthermore, the evaluation of the soluble form of PD-L1 in liquid biopsy has been implicated in detrimental response, but this finding was tumor dependent and needs further validation. In the future, it will be desirable to identify not only biomarkers of response but also biomarkers that could predict efficacy over time and that could be easily measured in plasma. In line with this, given that identification of a predictive biomarker is challenging, most agents under development are evaluated as single agents or in combination with anti-PD-(L)1 agents in immune-reactive tumors where anti-PD-(L)1 agents are given alone in first line, including indications such as PD-L1+ NSCLC or HNSCC. Then, if activity is detected in single-arm cohorts, expansion to other indications is explored. A description of novel combinations is beyond the scope of this work. However, it is important to mention those that act on exhausted T-cells as a principal cause of resistance, including 4-1BB or CD28 agonists.
Given the lack of reliable animal models to predict efficacy in humans, decisions regarding the development of a particular agent, and the selection of indications to be explored, are usually based on the following criteria: i) the biological rationale of the target, ii) the preclinical in vitro activity alone or in combination, and iii) the presence of the target and the specific immune population in a particular tumor type. If these criteria have been met for a particular agent, the potential for development of that compound will depend mainly on its mechanism of action and potential toxicity profile. Of note, toxicity will also depend on the mechanism of action. Substantial differences in toxicity have been observed between agents that modulate the myeloid compartment and those that activate T-cells. Toxicities of T-cell-activating agents such as T-cell engagers or bi-specific PD-L1-4-1BB antibodies include severe infusion reactions or cytokine release syndrome, among others, which are rarely observed with the other type of agents. For an adequate trial design and an early clinical development plan, all these concepts must be taken into consideration, including dose-escalation, dose-optimization, and dose-expansion strategies, in addition to the expected magnitude of benefit by indication. In summary, the clinical development strategy for a particular compound should be designed from the very beginning, taking into consideration the topics discussed in this review.
AO and AP designed the study. AO, EA, and AT contributed by providing material. All authors contributed to the article and approved the submitted version.
The Neurodata Without Borders ecosystem for neurophysiological data science | f46b0faf-e029-46b5-9f2a-36fe41797516 | 9531949 | Physiology[mh] | The immense diversity of life on Earth has always provided both inspiration and insight for biologists. For example, in neuroscience, the functioning of the brain is studied in species ranging from flies, to mice, to humans . Because brains evolved to produce a plethora of behaviors that advance organismal survival, neuroscientists monitor brain activity with a variety of different tasks and neural recording techniques. . These technologies provide complementary views of the brain, and creating a coherent model of how the brain works will require synthesizing data generated by these heterogeneous experiments. However, the extreme heterogeneity of neurophysiological experiments impedes the integration, reproduction, interchange, and reuse of diverse neurophysiology data. As other fields of science, such as climate science , astrophysics , and high-energy physics have demonstrated, community-driven standards for data and metadata are a critical step in creating robust data and analysis ecosystems, as well as enabling collaboration and reuse of data across laboratories. A standardized language for neurophysiology data and metadata (i.e., a data language) is required to enable neuroscientists to effectively describe and communicate about their experiments, and thus share the data. The extreme heterogeneity of neurophysiology experiments is exemplified in . Diverse experiments are designed to investigate a variety of neural functions, including sensation, perception, cognition, and action. Tasks include running on balls or treadmills (e.g. pictures, ; ), memory-guided navigation of mazes , production of speech , and memory formation . The use of different species in neuroscience is driven, in part, by the applicability of specific neurophysiological recording techniques . For example, the availability of genetically modified mice makes this species ideal to monitor the activity of genetically defined neurons using calcium sensors (e.g. with GCaMP; optophysiology, ‘o-phys’; ). On the other hand, intracranially implanted electrophysiology probes (‘e-phys’) with large numbers of electrodes enable monitoring the activity of many single neurons at millisecond resolution from different brain regions simultaneously in freely behaving rats . Likewise, in human epilepsy patients, arrays of electrodes on the cortical surface (i.e. electrocorticography, ECoG) provides direct electrical recording of mesoscale neural activity at high-temporal resolution across multiple brain areas (e.g., speech sensorimotor cortex ‘SMC’; ; ). Additionally, to understand the intracellular functioning of single neurons, scientists measure membrane potentials (ic-ephys), for example, via patch clamp recordings (see Appendix 1). As a final example, to study the detailed workings of complete neural circuits, supercomputers are used for biophysically detailed simulation of the intracellular membrane potentials of a large variety of neurons organized in complex networks ( ; Raikov and Soltesz, unpublished data; ). Although the heterogeneity described above is most evident across labs, it is present in a reduced form within single labs; lab members can use new equipment or different techniques in custom experiments to address specific hypotheses. 
As such, even within the same laboratory, storage and descriptions of data and metadata often vary greatly between experiments, making archival sharing and reuse of data a significant challenge. Across species and tasks, different acquisition technologies measure different neurophysiological quantities from multiple spatial locations over time. Thus, the numerical data itself can commonly be described in the form of space-by-time matrices, the storage of which has been optimized (for space and rapid access) by computer scientists for decades. It is the immense diversity of metadata required to turn those numbers into knowledge that presents the outstanding challenge. Scientific data must be thought of in the context of the entire data lifecycle, which spans planning, acquisition, processing, and analysis to publication and reuse . In this context, a ‘data ecosystem’ is a shared market for scientific data, software, and services that are able to work together. Such an ecosystem for neurophysiology would empower users to integrate software components and products from across the ecosystem to address complex scientific challenges. Foundational to realizing a data ecosystem is a common ‘language’ that enables seamless exchange of data and information between software components and users. Here, the principles of Findable, Accessible, Interoperable, and Reusable (i.e. FAIR) data management and stewardship are widely accepted as essential to ensure that data can flow reliably between the components of a data ecosystem. Traditionally, data standards are often understood as rigid and static data models and formats. Such standards are particularly useful to enable the exchange of specific data types (e.g. image data), but are insufficient to address the diversity of data types generated by constantly evolving experiments. Together, these challenges and requirements necessitate a conceptual departure from the traditional notion of a rigid and static data standard. That is, we need a ‘language’ where fundamental structures can be reused and combined in new ways to express novel concepts and experiments. A data language for neurophysiology will enable precise communication about neural data that can co-evolve with the needs of the neuroscience community. We created the Neurodata Without Borders (NWB) data language (i.e. a standardized language for describing data) for neurophysiology to address the challenges described above. NWB(v2) accommodates the massive heterogeneity and evolution of neurophysiology data and metadata in a unified framework through the development of a novel data language that can co-evolve with neurophysiology experiments. We demonstrate this through the storage of multimodal neurophysiology data, and derived products, in a single NWB file with easy visualization tools. This generality was enabled by the development of a robust, extensible, and sustainable software architecture based on our Hierarchical Data Modeling Framework (HDMF) . To facilitate new experimental paradigms, we developed methods for creating and sharing NWB Extensions that permit the NWB data language to co-evolve with the needs of the community. NWB is foundational for the Distributed Archives for Neurophysiology Data Integration (DANDI) data repository to enable collaborative data sharing and analysis. Together, NWB and DANDI make neurophysiology data FAIR. 
Indeed, NWB is integrated with a growing ecosystem of state-of-the-art analysis tools to provide a unified storage platform throughout the data life cycle. Through extensive and coordinated efforts in community engagement, software development, and interdisciplinary governance, NWB is now being utilized by more than 53 labs and research organizations. Across these groups, NWB is used for all neurophysiology data modalities collected from species ranging from flies to humans during diverse tasks. Together, the capabilities of NWB provide the basis for a community-based neurophysiology data ecosystem. The processes and principles we utilized to create NWB provide an exemplar for biological data ecosystems more broadly.

NWB enables unified description and storage of multimodal data and derived products

NWB files contain all of the measurements for a single experiment, along with all of the necessary metadata to understand that data. Neurophysiology experiments often contain multiple simultaneous streams of data, for example, via simultaneous recording of neural activity, sensory stimuli, behavioral tracking, and direct neural modulation. Furthermore, neuroscientists are increasingly leveraging multiple neurophysiology recording modalities simultaneously (e.g. ephys and ophys), which offer complementary information not achievable in a single modality. These distinct raw data input types often require processing, further expanding the multiplicity of data types that need to be described and stored. A key capability of NWB is to describe and store many data sources (including neurophysiological recordings, behavior, and stimulation information) in a unified way that is readily analyzed with all time bases aligned to a common clock. For each data source, raw acquired signals and/or preprocessed data can be stored in the same file. An example workflow for storing and processing electrophysiology and optical physiology in NWB proceeds as follows. Raw voltage traces from an extracellular electrophysiology recording and image sequences from an optical recording can both be stored in the same NWB file, or in separate NWB files synchronized to each other. Extracellular electrophysiology data often go through spike sorting, which processes the voltage traces into putative single units and action potential (a.k.a. spike) times for those units. The single-unit spike times can then also be written to the NWB file. Similarly, optical physiology is generally processed using segmentation algorithms that identify regions of the image corresponding to neurons and extract fluorescence traces for each neuron. The fluorescence traces can also be stored in the NWB file, resulting in raw and processed data for multiple input streams. The timing of each of these streams is defined separately, allowing streams with different sampling rates and starting times to be registered to the same common clock. NWB can also store raw and processed behavioral data as well as stimuli, such as animal location and the amplitude/frequency of sounds. The multi-modal capability of NWB is critical for capturing the diverse types of data simultaneously acquired in many neurophysiology experiments, particularly if those experiments involve multiple simultaneous neural recording modalities. Having pre-synchronized data in the same format enables faster and less error-prone development of analysis and visualization tools that provide simultaneous views across multiple streams.
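As a concrete illustration of this unified storage, the sketch below uses the PyNWB API to write a raw extracellular trace, sorted spike times, and a behavioral time series into a single file with a shared time base. It is a minimal sketch, assuming PyNWB 2.x; the file name, the device and electrode metadata, and the random example data are placeholders rather than values from any of the studies discussed here.

```python
from datetime import datetime, timezone
import numpy as np
from pynwb import NWBFile, NWBHDF5IO, TimeSeries
from pynwb.ecephys import ElectricalSeries

# One NWB file holds all streams of one session, referenced to a common start time.
nwbfile = NWBFile(
    session_description="example multimodal session",  # placeholder metadata
    identifier="EXAMPLE-0001",
    session_start_time=datetime(2022, 1, 1, tzinfo=timezone.utc),
)

# Describe the recording hardware so the raw trace is interpretable.
device = nwbfile.create_device(name="example-array")
group = nwbfile.create_electrode_group(
    name="shank0", description="example shank", location="CA1", device=device
)
for _ in range(4):
    nwbfile.add_electrode(x=0.0, y=0.0, z=0.0, imp=np.nan,
                          location="CA1", filtering="none", group=group)
region = nwbfile.create_electrode_table_region(region=[0, 1, 2, 3],
                                               description="all electrodes")

# Raw acquisition: voltages sampled at 30 kHz starting at t = 0 s.
raw = ElectricalSeries(name="ElectricalSeries",
                       data=np.random.randn(30000, 4),
                       electrodes=region, rate=30000.0, starting_time=0.0)
nwbfile.add_acquisition(raw)

# Processed ephys: spike times (in seconds) for one sorted unit.
nwbfile.add_unit(spike_times=[0.01, 0.23, 0.98])

# Behavior on the same clock: e.g. running speed sampled at 50 Hz.
speed = TimeSeries(name="running_speed", data=np.random.rand(500),
                   unit="cm/s", rate=50.0, starting_time=0.0)
nwbfile.add_acquisition(speed)

with NWBHDF5IO("example_session.nwb", mode="w") as io:
    io.write(nwbfile)
```

Because every stream carries its own starting time and sampling rate relative to the session start, downstream tools can align the raw trace, the sorted spikes, and the behavioral signal without additional bookkeeping.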
One example is an interactive dashboard for exploring a dataset of simultaneously recorded optical physiology and electrophysiology data published by the Allen Institute. This dashboard illustrates the simultaneous exploration of five data elements, all stored in a single NWB file. The microscopic image panel (at the far left of the dashboard) shows a frame of the video recorded by the microscope. The red outline overlaid on that image shows the region-of-interest where a cell has been identified by the experimenter. The fluorescence trace (dF/F) shows the activation of the region-of-interest over time. This activity is displayed in line with electrophysiology recordings of the same cell (ephys) and extracted spikes (below ephys). Interactive controls at the bottom of the dashboard allow a user to explore the complex and important relationship between these data sources. Visualization of multiple streams of data is a common need across different types of neurophysiology data. Another NWBWidgets dashboard demonstrates viewing of human body position tracking with simultaneously acquired ECoG data, as well as a panel for viewing the 3D position of electrodes on the participant's brain. Dashboards for specific experiment types can be constructed using NWBWidgets, a library for interactive web-based visualizations of NWB data that provides tools for navigating and combining data across modalities.

The NWB software architecture modularizes and integrates all components of a data language

Neuroscientists use NWB through a core software stack with four modularized components: the specification language, the data standard schema, data use APIs, and storage backends. The identification and modularization of these components was a core conceptual advance of the NWB software. This software architecture provides flexible accommodation of the heterogeneous use cases and needs of NWB users. Modularizing the software in this way allows extending the schema to handle new types of data, implementing APIs in new programming languages, and storing NWB using different backends, all while maintaining compliance with NWB and providing a stable interface for users to interact with. First, we describe the specification language used to define hierarchical data models. The YAML-based specification language defines four primitive structures: Groups, Datasets, Attributes, and Links. Each of these primitive structures has characteristics that define its name and parameters (e.g. the allowable shapes of a Dataset). Importantly, these primitives are abstract and are not tied to any particular data storage backend. The specification language also uses object-oriented principles to define neurodata types that, like classes, can be reused through inheritance and combined through composition to build more complex structures. The NWB core schema uses the primitives defined in the specification language to define more complicated structures and requirements for particular types of neurophysiology data. For instance, an ElectricalSeries is a neurodata type that defines the data and metadata for an intracranially recorded voltage time series in an extracellular electrophysiology experiment. ElectricalSeries extends the TimeSeries neurodata type, which is a generic structure designed for any measurement that is sampled over time, and defines fields such as data, units of measurement, and sample times (specified either with timestamps or with sampling rate and start time).
ElectricalSeries also requires an electrodes field, which provides a reference to a table of electrodes describing the locations and characteristics of the electrodes used to record the data. The NWB core schema defines many neurodata types in this way, building from generic concepts to specific data elements. The neurodata types have rigorous metadata requirements that ensure a sufficiently rich description of the data for reanalysis. The neurodata types are divided into modules such as ecephys (extracellular electrophysiology), icephys (intracellular electrophysiology), ophys (optical physiology), and behavior. Importantly, the core schema is defined on its own and is agnostic to APIs and programming languages. This allows for the creation of an API in any programming language, which will allow NWB to stay up to date as programming technologies advance. Application Programming Interfaces (APIs) provide convenient interfaces for writing and reading data according to the NWB schema. The development team maintains APIs in Python (PyNWB) and MATLAB (MatNWB), the two most widely used programming languages in neurophysiology. These APIs are governed by the NWB schema and use an object-oriented design in which neurodata types (e.g. ElectricalSeries or TwoPhotonSeries) are represented by a dedicated interface class. Both APIs are fully compliant with the NWB standard and are, hence, interoperable (i.e. files generated by PyNWB can be read via MatNWB and vice versa). Both APIs also support advanced data Input/Output (I/O) features, such as lazy data read, compression, and iterative data write for data streaming. A key difference in the design of PyNWB and MatNWB is the implementation of the data translation process. PyNWB uses a dynamic data translation process based on data builders. The data builders are classes that mirror the NWB specification language primitives and provide an interoperability layer where data from different storage backends can be mapped using object mappers into a uniform API. In contrast, MatNWB implements a static translation process that generates the MATLAB API classes automatically from the schema. The MatNWB approach simplifies updating of the API to support new versions of the NWB schema and extensions, and helps minimize cost for development, but with reduced flexibility in supported storage backends and API. The difference in the data translation process between the APIs (i.e. static vs. dynamic) is a reflection of their different target uses. MatNWB primarily targets data conversion and analysis. In contrast, PyNWB additionally targets integration with data archives and web technologies and is used heavily for development of extensions and exploration of new technologies, such as alternate storage backends and parallel computing libraries. Finally, the specification of data storage backends deals with translating NWB data models to/from storage on disk. Data storage is governed by formal specifications describing the translation of NWB data primitives (e.g. groups or datasets) to primitives of the particular storage backend format (e.g. HDF5) and is implemented as part of the NWB user APIs. HDF5 is our primary backend, chosen for its broad support across scientific programming languages, its sophisticated tools for handling large datasets, and its ability to express very complex hierarchical structures in relatively few files. The interoperability afforded by the PyNWB builders allows for other backends, and we have a prototype for storing NWB in the Zarr format.
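As a brief illustration of the read side of the APIs, the sketch below opens a file with PyNWB and pulls only a small slice of a large ElectricalSeries from disk, which is the lazy-read behavior mentioned above. It assumes a file like the one written in the earlier sketch; the file and object names are placeholders.

```python
from pynwb import NWBHDF5IO

# Open an existing file for reading; data arrays are not loaded into memory yet.
io = NWBHDF5IO("example_session.nwb", mode="r")
nwbfile = io.read()

# Typed objects defined by the schema are exposed as Python objects.
es = nwbfile.acquisition["ElectricalSeries"]
print(type(es).__name__, es.rate, es.data.shape)

# Lazy read: only this slice (first second, first 4 channels) is pulled from disk.
snippet = es.data[:30000, :4]

# Spike times of the first sorted unit from the Units table.
spike_times = nwbfile.units["spike_times"][0]

io.close()
```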
Together, these four components (specification language, standard schema, APIs, and storage backend) and the interaction between them constitute a sophisticated software infrastructure that is applicable beyond neuroscience and could be useful to many other domains. Therefore, we have factored out the domain-agnostic parts of each of these four components into a Python software package called the Hierarchical Data Modeling Framework (HDMF). Much of the infrastructure described here, including the specification language, fundamental structures of the core schema, base classes for the object mapper and builder layers, and base classes of the PyNWB API, is defined in the HDMF package. With its modular architecture and open-source model, the NWB software stack instantiates the NWB data language and makes NWB accessible to users and developers. The NWB software design illustrates the complexity of creating a data language and provides reusable components (e.g. HDMF and the HDMF Common Schema) that can be applied more broadly to facilitate development of data languages for other biological fields in the future. All NWB software is open source, managed and versioned using Git, and released using a permissive BSD license via GitHub. NWB uses automated continuous integration for testing on all major architectures (MacOS, Windows, and Linux), and all core software can be installed via common package managers (e.g. pip and conda). The suite of NWB core software tools enables users to easily read and write NWB files and extend NWB to integrate new data types, and builds the foundation for integration of NWB with community software. NWB data can also be easily accessed in other programming languages (e.g. IGOR or R) using the HDF5 APIs available across modern scientific programming languages.

NWB enables creation and sharing of extensions to incorporate new use cases

As with all of biology, neurophysiological discovery is driven in large part by new tools that can answer previously unconsidered questions. Thus, a language for neurophysiology data must be able to co-evolve with the experiments being performed and provide customization capability while maintaining stability. NWB enables the creation and sharing of user-defined extensions to the standard that support new and specialized data types. Neurodata Extensions (NDX) are defined using the same formal specification language used by the core NWB schema. Extensions can build off of data types defined in the core schema or other extensions through inheritance and composition. This enables the reuse of definitions and associated code, facilitates integration with existing tools, and makes it easier to contextualize new data types. NWB provides a comprehensive set of tools and services for developing and using neurodata extensions. The NWB Specification API, HDMF DocUtils, and the PyNWB and MatNWB user APIs work with extensions with little adjustment. In addition, the NDX Template makes it easy for users to develop new extensions. Appendix 2 demonstrates these steps for the ndx-simulation-output extension. The Neurodata Extensions Catalog then provides a centralized listing of extensions for users to publish, share, find, and discuss extensions across the community. Appendix 3 provides a more detailed overview of the extension workflow as part of the NDX Catalog.
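As a minimal sketch of what defining an extension looks like in practice, the example below uses the specification classes shipped with PyNWB to declare a new neurodata type that extends the core LabMetaData type. The type name, fields, namespace name, and output file names are hypothetical and chosen only for illustration; real extensions are typically generated from the NDX Template rather than written from scratch.

```python
from pynwb.spec import (NWBNamespaceBuilder, NWBGroupSpec,
                        NWBDatasetSpec, NWBAttributeSpec)

# Declare the new namespace and state which core type it builds on.
ns_builder = NWBNamespaceBuilder(doc="example extension for laser metadata",
                                 name="ndx-example-laser", version="0.1.0")
ns_builder.include_type("LabMetaData", namespace="core")

# A new neurodata type that extends LabMetaData with two extra fields.
laser = NWBGroupSpec(
    neurodata_type_def="LaserMetaData",
    neurodata_type_inc="LabMetaData",
    doc="Metadata about the stimulation laser used in a session.",
    datasets=[NWBDatasetSpec(name="wavelength", dtype="float",
                             doc="Wavelength in nanometers.")],
    attributes=[NWBAttributeSpec(name="manufacturer", dtype="text",
                                 doc="Laser manufacturer.")],
)

# Write the extension and namespace YAML files that PyNWB/MatNWB can load.
ns_builder.add_spec("ndx-example-laser.extensions.yaml", laser)
ns_builder.export("ndx-example-laser.namespace.yaml")
```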
Several extensions have been registered in the Neurodata Extensions Catalog, including extensions to support storage of the cortical surface mesh of an electrocorticography experiment subject, storage of fluorescence resonance energy transfer (FRET) microscopy data, and metadata from an intracellular electrophysiology acquisition system. The catalog also includes the ndx-simulation-output extension for the storage of the outputs of large-scale simulations. Large-scale network models that are described using the new SONATA format can be converted to NWB using this extension. The breadth of these extensions demonstrates that NWB will be able to accommodate new experimental paradigms in the future. As particular extensions gain traction within the community, they may be integrated into the core NWB format for broader use and standardization. NWB has a formal, community-driven review process for refining the core format so that NWB can adapt to evolving data needs in neuroscience. The owners of an extension can submit a community proposal for the extension to the NWB Technical Advisory Board, which then evaluates the extension against a set of metrics and best practices published on the catalog website. The extension is then tested and reviewed by both a dedicated working group of potential stakeholders and the general public before it is approved and integrated into the core NWB format. Key advantages of the extension approach are that it allows iterative development of extensions and complete implementation and vetting of new data types under several use cases before they become part of the core NWB format. The NWB extension mechanism thus enables NWB to provide a unified data language for all data related to an experiment, allows description of data from novel experiments, and supports the process of evolving the core NWB standard to fit the needs of the neuroscientific community.

NWB is foundational for the DANDI data repository to enable collaborative data sharing

Making neurophysiology data accessible supports published findings and allows secondary reuse. To date, many neurophysiology datasets have been deposited into a diverse set of repositories (e.g. CRCNS, Figshare, Open Science Framework, Gin). However, no single data archive provides the neuroscientific community the capacity and the domain specificity to store and access large neurophysiology datasets. Most current repositories have specific limits on data sizes and are often generic, and therefore lack the ability to search using domain-specific metadata. Further, for most neuroscientists, these archives often serve as endpoints associated with publishing, while research is typically an ongoing and collaborative process. Few data archives support a collaborative research model that allows prospective data submission, analysis of data directly in the archive, and opening the conversation to a broader community. Enabling reanalysis of published data was a key challenge identified by the BRAIN Initiative. Together, these issues impede access and reuse of data, ultimately decreasing the return on investment into data collection by both the experimentalist and the funding agencies. To address these and other challenges associated with neurophysiology data storage and access, we developed DANDI, a Web-based data archive that also serves as a collaboration space for neurophysiology projects.
The DANDI data archive ( https://dandiarchive.org ) is a cloud-based repository for cellular neurophysiology data and uses NWB as its core data language. Users can organize collections of NWB files (e.g. recorded from multiple sessions) into DANDI datasets (so-called Dandisets). Users can view the public Dandisets using a Web browser and search for data from different projects, people, species, and modalities. This search is over metadata that has been extracted directly from the NWB files where possible. Users can interact with the data in the archive using a JupyterHub Web interface to explore, visualize, and analyze data stored in the archive. Using the DANDI Python client, users can organize data locally into the structure required by DANDI as well as download data from and upload data to the archive. Software developers can access information about a Dandiset and all the files it contains using the DANDI Representational State Transfer (REST) API ( https://api.dandiarchive.org/ ). The REST API also allows developers to create software tools and database systems that interact with the archive. Each Dandiset is structured by grouping files belonging to different biosamples, with some relevant metadata stored in the name of each file, thus aligning itself with the BIDS standard. Metadata in DANDI is stored using the JSON-LD format, thus allowing graph-based queries and exposing DANDI to Google Dataset Search. Dandiset creators can use DANDI as a living repository and continue to add data and analyses to an existing Dandiset. Released versions of Dandisets are immutable and receive a digital object identifier (DOI). The data are presently stored on an Amazon Web Services Public Dataset Program bucket, enabling open access to the data over the Web, and are backed up on institutional repositories. DANDI is working with hardware platforms (e.g. OpenEphys), database software (e.g. DataJoint), and various data producers to generate and distribute NWB datasets to the scientific community. DANDI provides neuroscientists and software developers with a Platform as a Service (PaaS) infrastructure based on the NWB data language and supports interaction via the web browser or through programmatic clients, software, and other services. In addition to serving as a data archive and providing persistence to data, it supports continued evolution of Dandisets. This allows scientists to use the archive to collect data toward common projects across sites and to engage collaborators actively, directly at the onset of data collection rather than at the end. DANDI also provides a computational interface to the archive to accelerate analytics on data and links Dandisets to eventual publications when generated. The code repositories for the entire infrastructure are available on GitHub under an Apache 2.0 license.
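The sketch below illustrates programmatic access with the DANDI Python client, listing the assets of a public Dandiset and resolving a direct content URL for one of them. The Dandiset identifier is a placeholder, and the snippet assumes a recent version of the dandi package; it is an illustrative example rather than a prescribed workflow.

```python
from dandi.dandiapi import DandiAPIClient

dandiset_id = "000006"  # placeholder ID; any public Dandiset works

with DandiAPIClient() as client:
    dandiset = client.get_dandiset(dandiset_id, "draft")

    # List the NWB assets contained in the Dandiset.
    for asset in dandiset.get_assets():
        print(asset.path)

    # Resolve a direct download/streaming URL for one asset, which can then be
    # downloaded or streamed by analysis tools.
    asset = next(dandiset.get_assets())
    url = asset.get_content_url(follow_redirects=1, strip_query=True)
    print(url)
```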
NWB is integrated with state-of-the-art analysis tools throughout the data life cycle

The goal of NWB is to accelerate the rate and improve the quality of scientific discovery through rigorous description and high-performance storage of neurophysiology data. Achieving this goal requires us to consider not just NWB, but the entire data life cycle, from planning and acquisition through processing and analysis to publication and reuse. NWB provides a common language for neurophysiology data collected using existing and emerging neurophysiology technologies, integrated into a vibrant neurophysiology data ecosystem. We describe the software relating to NWB as an 'ecosystem' because it is a marketplace of a diverse set of tools, each playing a different role, from data acquisition to visualization, analysis, and publication. NWB allows scientists to identify an unmet need and contribute new tools to address this need. This is critical to truly make neurophysiology data Findable, Accessible, Interoperable, and Reusable (i.e. FAIR). NWB supports experiment planning by helping users to clearly define what metadata to collect and how the data will be formatted and managed. To support data acquisition, NWB allows for the storage of unprocessed acquired electrical and optical physiology signals. Storage of these signals requires either streaming the data directly to the NWB file from the acquisition system or converting data from other formats after acquisition. Some acquisition systems, such as the MIES intracellular electrophysiology acquisition platform, already support direct recording to NWB, and the community is actively working to expand support for direct recording to NWB, for example, via ScanImage and OpenEphys. To allow utilization of legacy data and other acquisition systems, a variety of tools exist for converting neurophysiology data to NWB. For extracellular electrophysiology, the SpikeInterface package provides a uniform API for conversion and processing of data that supports conversion of 19+ different proprietary acquisition formats to NWB. For intracellular electrophysiology, the Intrinsic Physiology Feature Extractor (IPFX) package supports conversion of data acquired with Patchmaster. Direct conversion of raw data to NWB at the beginning of the data lifecycle facilitates data re-processing and re-analysis with up-to-date methods and data re-use more broadly. The NWB community has been able to grow and integrate with an ecosystem of software tools that offer convenient methods for processing data from NWB files (and other formats) and writing the results into an NWB file. NWB allows these tools to be easily accessed, compared, and used interoperably. Furthermore, storage of processed data in NWB files allows direct re-analysis of activation traces or spike times via novel analysis methods without having to reproduce time-intensive pre-processing steps. For example, the SpikeInterface API supports export of spike-sorting results to NWB across nine different spike sorters and provides customizable data curation functions for interrogation of results from multiple spike sorters with common metrics. For optical physiology, several popular state-of-the-art software packages, such as CaImAn, suite2p, ciapkg, and EXTRACT, help users build processing pipelines that segment optical images into regions of interest corresponding to putative neurons and write these results to NWB. There is also a range of general and application-specific tools emerging for analysis of neurophysiology data in NWB. The NWBWidgets library enables interactive exploration of NWB files via web-based views of the NWB file hierarchy and dynamic plots of neural data, for example visualizations of spike trains and optical responses. NWB Explorer, developed by MetaCell in collaboration with OpenSourceBrain, is a web app that allows a user to explore any publicly hosted NWB file and supports custom visualizations and analysis via Jupyter notebooks, as well as use of the NWBWidgets.
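For quick interactive inspection, NWBWidgets can render any NWB file directly in a Jupyter notebook. The sketch below uses a placeholder local file name; the commented lines indicate how the same file could be handed to another ecosystem tool such as SpikeInterface and are indicative rather than prescriptive.

```python
# Run inside a Jupyter notebook.
from pynwb import NWBHDF5IO
from nwbwidgets import nwb2widget

io = NWBHDF5IO("example_session.nwb", mode="r")
nwbfile = io.read()

# Builds an interactive, hierarchical view of the file with type-aware plots
# (e.g. traces for TimeSeries, rasters for the Units table).
nwb2widget(nwbfile)

# The same file can also be loaded into other ecosystem tools, for example:
# import spikeinterface.extractors as se
# recording = se.NwbRecordingExtractor("example_session.nwb")
```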
These tools allow neuroscientists to inspect their own data for quality control and enable data reusers to quickly understand the contents of a published NWB file. In addition to these general-purpose tools, many application-specific tools, for example, RAVE, ecogVIS, Brainstorm, Neo, and others, already support or are in the process of developing support for analysis of NWB files. Many journals and funding agencies are beginning to require that data be made FAIR. For publication and preservation, archives are an essential component of the NWB ecosystem, allowing data producers to document data associated with publications and share that data with others. NWB files can be stored in many popular archives, such as FigShare and Collaborative Research in Computational Neuroscience (CRCNS.org). As described earlier, in the context of the NIH BRAIN Initiative, the DANDI archive has been specifically designed to publish and validate NWB files and leverage their structure for searching across datasets. In addition, several other archives, for example, DABI and OpenSourceBrain, are also supporting publication of NWB data. Data archives also play a crucial role in discovery and reuse of data. In addition to providing core functionality for data storage and search, archives increasingly also provide compute capacity for reanalysis. For example, DANDI Hub provides users a familiar JupyterHub interface that supports interactive exploration and processing of NWB files stored in the archive. The NWB data APIs, validation, and inspection tools also play a critical role in data reuse by enabling access and ensuring data validity. Finally, the Neurodata Extension Catalog described earlier facilitates accessibility and reuse of data files that use NWB extensions. NWB integrates with (rather than competes with) existing and emerging technologies across the data lifecycle, creating a flourishing NWB software ecosystem that enables users to access state-of-the-art analysis tools, share and reuse data, improve efficiency, reduce cost, and accelerate and enable the scientific discovery process. See also Appendix 4 for an overview of NWB-enabled tools organized by application area and environment. Thus, NWB provides a common language to describe neurophysiology experiments, data, and derived data products that enables users to maintain and exchange data throughout the data lifecycle and access state-of-the-art software tools.

NWB and DANDI build the foundation for a FAIR neurophysiology data ecosystem

There have been previous efforts to standardize neurophysiology data, such as NWB(v1.0) and NIX. While NWB(v1.0) drafted a standard for neurophysiology, it lacked generality, which limited its scope, and did not have a reliable and rigorous software strategy and APIs, making it hard to use and unreliable in practice. In contrast, NIX defines a generic data model for storage of annotated scientific datasets, with APIs in C++ and Python and bindings for Java and MATLAB. As such, NIX provides important functionality towards building a FAIR data ecosystem. However, the NIX data model lacks specificity with regard to neurophysiology, leaving it up to the user to define appropriate schema to facilitate FAIR compliance. Due to this lack of specificity, NIX files can also be more varied in structure and naming conventions, which makes it difficult to aggregate across NIX datasets from different labs. We assessed and compared the compliance of these different solutions for sharing neurophysiology data with FAIR data principles.
The assessment for NIX is based on the INCF review for SPP endorsement. We also include a more in-depth breakdown of the assessment in Appendix 5. With increasing specificity of data models and standard schema—that is, as we move from general, self-describing formats (e.g. Zarr or HDF5) to generic data models (e.g. NIX) to application-specific standards (e.g. NWB)—compliance with FAIR principles and rigidness of the data specification increase. In practice, the various approaches often focus on different data challenges. As such, this is not an assessment of the quality of a product per se, but an assessment of the out-of-the-box compliance with FAIR principles in the context of neurophysiology. For example, while self-describing data formats (like HDF5 or Zarr) lack specifics about (meta)data related to neurophysiology, they provide important technical solutions towards enabling high-performance data management and storage. Complementary to standardization of data, software packages, e.g., Neo, SpikeInterface, and others, aim to simplify programmatic interaction with neurophysiology data in diverse formats and/or tools with diverse programming interfaces (e.g. for spike sorting) by providing common software interfaces for interacting with the data/tools. This strategy provides an important conduit to enable access to and facilitate integration with a diversity of data and tools. However, this approach does not address (nor does it aim to address) the issue of compliance of data with FAIR principles; rather, it aims to improve interoperability between and interaction with a diversity of tools and data formats. Ultimately, standardization of data and creation of common software interfaces are not competing strategies, but synergistic approaches that together help create a more integrated data ecosystem. Indeed, tools such as SpikeInterface are an important component of the larger NWB software ecosystem that helps create an accessible neurophysiology data ecosystem by making it easier for users to integrate their data and tools with NWB and by facilitating access and interoperability of diverse tools. Data standards build the foundation for an overall data strategy to ensure compliance with FAIR data principles. Ultimately, however, ensuring FAIR data sharing and use depends on an ecosystem of data standards and data management, analysis, visualization, and archive tools as well as laws, regulations, and data governance structures—for example the NIH BRAIN Initiative Data Sharing Policy or the OMB Open Data Policy—all working together. It is ultimately the combination of NWB and DANDI working together that enables compliance with FAIR principles. Here, certain aspects, such as usage licenses (R.1.1), indexing and search (F.4), authenticated access (A1.2), and long-term availability of metadata (A2), are explicitly the role of the archive. Together, NWB and DANDI thus make neurophysiology data FAIR.

Coordinated community engagement, governance, and development of NWB

The neurophysiology community consists of a large diversity of stakeholders with vested interests and broad use cases. Inclusive engagement and outreach with the community are central to achieving acceptance and adoption of NWB and to ensuring that NWB meets user needs. Thus, development of scientific data languages is as much a sociological challenge as it is a technological challenge.
To address this challenge, NWB has adopted a modern open-source software strategy with community resources and governance, and a variety of engagement activities. Execution of this strategy requires coordinating efforts across stakeholders, use cases, and standard technologies to prioritize software development and resolve potential conflicts. Such coordination necessitates a governance structure reflecting the values of the project and the diverse composition of the community. The NWB Executive Board consists of diverse experimental and computational neuroscientists from the community and serves as the steering committee for developing the long-term vision and strategy. The NWB Executive Board (see Acknowledgements) was established in 2007 as a body independent from the technical team and PIs of NWB grants. The NWB Core Technology Team leads and coordinates the development of the NWB data language and software infrastructure to ensure quality, stability, and consistency of NWB technologies, as well as timely response to user issues. The Core Technology Team reports regularly to and coordinates with the Executive Board. Neuroscience Application Teams, consisting of expert users and core developers, lead engagement with targeted neuroscience areas in electrophysiology, optical physiology, and other emerging applications. These application teams are responsible for developing extensions to the data standard, new features, and technology integration together with Community Working Groups. The working groups allow for agile, community-driven development and evaluation of standard extensions and technologies, and allow users to directly engage with the evolution of NWB. The broader neuroscience community further contributes to NWB via issue tickets, contributions to NWB software and documentation, and by creating and publishing data in NWB. This governance and development structure emphasizes a balance between the stability of NWB technologies that ensures reliable production software, direct engagement with the community to ensure that NWB meets diverse stakeholder needs, and agile response to issues and emerging technologies. The balance between stability, diversity, and agility is also reflected in the overall timeline of the NWB project. The NWB 1.0 prototype focused on evaluation of existing technologies and community needs and on development of a draft data standard. Building on this prototype, the NWB 2.0 project initially focused on the redesign and productization of NWB, emphasizing the creation of a sustainable software architecture, a reliable data standard, and software ready for use. Following the first full release of NWB 2.0 in January 2019, the emphasis then shifted to adoption and integration of NWB. The goal has been to grow the community and software ecosystem while maintaining and continuing to refine NWB. Together, these technical and community engagement efforts have resulted in a vibrant and growing ecosystem of public NWB data (Appendix 6) and tools utilizing NWB. The core NWB software stack has continued to grow steadily since the release of NWB 2.0 in January 2019, illustrating the need for continued development and maintenance of NWB. See also Appendix 7 for an overview of the software release process and history for the NWB schema and APIs. In 2020, more than 600 scientists participated in NWB developer and user workshops, and we have seen steady growth in attendance at NWB events over time.
NWB files contain all of the measurements for a single experiment, along with all of the necessary metadata to understand that data. Neurophysiology experiments often contain multiple simultaneous streams of data, for example, via simultaneous recording of neural activity, sensory stimuli, behavioral tracking, and direct neural modulation. Furthermore, neuroscientists are increasingly leveraging multiple neurophysiology recording modalities simultaneously (e.g. ephys and ophys), which offer complementary information not achievable in a single modality. These distinct raw data input types often require processing, further expanding the multiplicity of data types that need to be described and stored. A key capability of NWB is to describe and store many data sources (including neurophysiological recordings, behavior, and stimulation information) in a unified way that is readily analyzed with all time bases aligned to a common clock. For each data source, raw acquired signals and/or preprocessed data can be stored in the same file. illustrates a workflow for storing and processing electrophysiology and optical physiology in NWB . Raw voltage traces ( , top) from an extracellular electrophysiology recording and image sequences from an optical recording ( , bottom) can both be stored in the same NWB file, or in separate NWB files synchronized to each other. Extracellular electrophysiology data often goes through spike sorting, which processes the voltage traces into putative single units and action potential (a.k.a. spike) times for those units ( , top). The single-unit spike times can then also be written to the NWB file. Similarly, optical physiology is generally processed using segmentation algorithms to identify regions of the image that correspond to neurons and extract fluorescence traces for each neuron ( , bottom). The fluorescence traces can also be stored in the NWB file, resulting in raw and processed data for multiple input streams. The timing of each of these streams is defined separately, allowing streams with different sampling rates and starting times to be registered to the same common clock. As illustrated in , NWB can also store raw and processed behavioral data as well as stimuli, such as animal location and amplitude/frequency of sounds. The multi-modal capability of NWB is critical for capturing the diverse types of data simultaneously acquired in many neurophysiology experiments, particularly if those experiments involve multiple simultaneous neural recording modalities. Having pre-synchronized data in the same format enables faster and less error-prone development of analysis and visualization tools that provide simultaneous views across multiple streams.
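To make this concrete, the following is a minimal sketch (not drawn from the NWB tutorials; all names and values are placeholders) of how a raw acquisition stream, sorted spike times, and a processed fluorescence trace might be written to a single file with PyNWB. A real optical physiology pipeline would normally use the dedicated ophys types (e.g. RoiResponseSeries) rather than a plain TimeSeries.

```python
from datetime import datetime, timezone

import numpy as np
from pynwb import NWBFile, NWBHDF5IO, TimeSeries

# One file holds raw acquisition, sorted spike times, and a processed optical
# trace, all referenced to the session start time (values are placeholders).
nwbfile = NWBFile(
    session_description="example multimodal session",
    identifier="example-0001",
    session_start_time=datetime(2021, 6, 1, tzinfo=timezone.utc),
)

# Raw acquisition stream, e.g. one channel of extracellular voltage.
raw = TimeSeries(
    name="raw_voltage",
    data=np.random.randn(30000),
    unit="volts",
    rate=30000.0,
    starting_time=0.0,
)
nwbfile.add_acquisition(raw)

# Spike times of putative single units produced by spike sorting.
nwbfile.add_unit(spike_times=[0.01, 0.35, 0.42])
nwbfile.add_unit(spike_times=[0.05, 0.12, 0.61])

# A processed fluorescence (dF/F) trace stored in a processing module.
ophys = nwbfile.create_processing_module(name="ophys", description="processed optical data")
ophys.add(TimeSeries(name="dff", data=np.random.randn(900), unit="a.u.",
                     rate=30.0, starting_time=0.0))

with NWBHDF5IO("example_session.nwb", "w") as io:
    io.write(nwbfile)
```

Because each stream carries its own starting time and sampling rate relative to the session clock, downstream tools can align the modalities without additional bookkeeping.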
shows an interactive dashboard for exploring a dataset of simultaneously recorded optical physiology and electrophysiology data published by the Allen Institute . This dashboard illustrates the simultaneous exploration of five data elements all stored in a single NWB file. The microscopic image panel ( , far left) shows a frame of the video recorded by the microscope. The red outline overlaid on that image shows the region-of-interest where a cell has been identified by the experimenter. The fluorescence trace (dF/F) shows the activation of the region-of-interest over time. This activity is displayed in line with electrophysiology recordings of the same cell (ephys), and extracted spikes (below ephys). Interactive controls ( , bottom) allow a user to explore the complex and important relationship between these data sources. Visualization of multiple streams of data is a common need across different types of neurophysiology data. Another NWBWidgets dashboard is described in , which demonstrates a dashboard for viewing human body position tracking with simultaneously acquired ECoG data, as well as a panel for viewing the 3D position of electrodes on the participant’s brain. Dashboards for specific experiment types can be constructed using NWBWidgets, a library for interactive web-based visualization of NWB data that provides tools for navigating and combining data across modalities. Neuroscientists use NWB through a core software stack with four modularized components: the specification language, the data standard schema, data use APIs, and storage backends. The identification and modularization of these components was a core conceptual advance of the NWB software. This software architecture provides flexible accommodation of the heterogeneous use cases and needs of NWB users. Modularizing the software in this way allows extending the schema to handle new types of data, implementing APIs in new programming languages, and storing NWB using different backends, all while maintaining compliance with NWB and providing a stable interface for users to interact with. First, we describe the specification language used to define hierarchical data models. The YAML-based specification language defines four primitive structures: Groups, Datasets, Attributes, and Links. Each of these primitive structures has characteristics to define their names and parameters (e.g. the allowable shapes of a Dataset). Importantly, these primitives are abstract, and are not tied to any particular data storage backend. The specification language also uses object-oriented principles to define neurodata types that, like classes, can be reused through inheritance and combined through composition to build more complex structures. The NWB core schema uses the primitives defined in the specification language to define more complicated structures and requirements for particular types of neurophysiology data. For instance, an ElectricalSeries is a neurodata type that defines the data and metadata for an intracranially recorded voltage time series in an extracellular electrophysiology experiment. ElectricalSeries extends the TimeSeries neurodata type, which is a generic structure designed for any measurement that is sampled over time, and defines fields such as data, units of measurement, and sample times (specified either with timestamps or sampling rate and start time).
ElectricalSeries also requires an electrodes field, which provides a reference to a table of electrodes describing the locations and characteristics of the electrodes used to record the data. The NWB core schema defines many neurodata types in this way, building from generic concepts to specific data elements. The neurodata types have rigorous metadata requirements that ensure a sufficiently rich description of the data for reanalysis. The neurodata types are divided into modules such as ecephys (extracellular electrophysiology), icephys (intracellular electrophysiology), ophys (optical physiology), and behavior. Importantly, the core schema is defined on its own and is agnostic to APIs and programming languages. This allows for the creation of an API in any programming language, which will allow NWB to stay up to date as programming technologies advance. Application Programming Interfaces (APIs) provide convenient interfaces for writing and reading data according to the NWB schema. The development team maintains APIs in Python (PyNWB) and MATLAB (MatNWB), the two most widely used programming languages in neurophysiology. These APIs are governed by the NWB schema and use an object-oriented design in which neurodata types (e.g. ElectricalSeries or TwoPhotonSeries) are represented by a dedicated interface class. Both APIs are fully compliant with the NWB standard and are, hence, interoperable (i.e. files generated by PyNWB can be read via MatNWB and vice versa). Both APIs also support advanced data Input/Output (I/O) features, such as lazy data read, compression, and iterative data write for data streaming. A key difference in the design of PyNWB and MatNWB is the implementation of the data translation process. PyNWB uses a dynamic data translation process based on data builders . The data builders are classes that mirror the NWB specification language primitives and provide an interoperability layer where data from different storage backends can be mapped using object mappers into a uniform API. In contrast, MatNWB implements a static translation process that generates the MATLAB API classes automatically from the schema . The MatNWB approach simplifies updating of the API to support new versions of the NWB schema and extensions, and helps minimize cost for development, but with reduced flexibility in the supported storage backends and API. The difference in the data translation process between the APIs (i.e. static vs. dynamic) is a reflection of the different target uses . MatNWB primarily targets data conversion and analysis. In contrast, PyNWB additionally targets integration with data archives and web technologies and is used heavily for development of extensions and exploration of new technologies, such as alternate storage backends and parallel computing libraries. Finally, the specification of data storage backends deals with translating NWB data models to/from storage on disk. Data storage is governed by formal specifications describing the translation of NWB data primitives (e.g. groups or datasets) to primitives of the particular storage backend format (e.g. HDF5) and is implemented as part of the NWB user APIs. HDF5 is our primary backend, chosen for its broad support across scientific programming languages, its sophisticated tools for handling large datasets, and its ability to express very complex hierarchical structures in relatively few files. The interoperability afforded by the PyNWB builders allows for other backends, and we have a prototype for storing NWB in the Zarr format.
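As a brief illustration of the read path (a minimal sketch, assuming the file written in the earlier example), the same data can be opened lazily through PyNWB; an equivalent read in MATLAB would go through MatNWB (e.g. its nwbRead function).

```python
from pynwb import NWBHDF5IO

# Open an existing NWB file read-only; datasets are loaded lazily,
# so only the requested slice of the raw trace is pulled into memory.
with NWBHDF5IO("example_session.nwb", "r") as io:
    nwbfile = io.read()
    raw = nwbfile.acquisition["raw_voltage"]
    first_second = raw.data[0:30000]             # slice of the on-disk HDF5 dataset
    unit0_spikes = nwbfile.units["spike_times"][0]
    print(raw.unit, raw.rate, len(first_second), len(unit0_spikes))
```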
Together, these four components (specification language, standard schema, APIs, and storage backend) and the interaction between them constitute a sophisticated software infrastructure that is applicable beyond neuroscience and could be useful to many other domains. Therefore, we have factored out the domain-agnostic components of each of these four components into a Python software package called the Hierarchical Data Modeling Framework (HDMF) . Much of the infrastructure described here, including the specification language, fundamental structures of the core schema, base classes for the object mapper and builder layers, and base classes of the PyNWB API, is defined in the HDMF package. With its modular architecture and open-source model, the NWB software stack instantiates the NWB data language and makes NWB accessible to users and developers. The NWB software design illustrates the complexity of creating a data language and provides reusable components (e.g. HDMF and the HDMF Common Schema) that can be applied more broadly to facilitate development of data languages for other biological fields in the future. All NWB software is open source, managed and versioned using Git, and released using a permissive BSD license via GitHub. NWB uses automated continuous integration for testing on all major architectures (macOS, Windows, and Linux) and all core software can be installed via common package managers (e.g. pip and conda). The suite of NWB core software tools enables users to easily read and write NWB files and extend NWB to integrate new data types, and builds the foundation for integration of NWB with community software. NWB data can also be easily accessed in other programming languages (e.g. IGOR or R) using the HDF5 APIs available across modern scientific programming languages. As with all of biology, neurophysiological discovery is driven in large part by new tools that can answer previously unconsidered questions. Thus, a language for neurophysiology data must be able to co-evolve with the experiments being performed and provide customization capability while maintaining stability. NWB enables the creation and sharing of user-defined extensions to the standard that support new and specialized data types . Neurodata Extensions (NDX) are defined using the same formal specification language used by the core NWB schema. Extensions can build off of data types defined in the core schema or other extensions through inheritance and composition. This enables the reuse of definitions and associated code, facilitates the integration with existing tools, and makes it easier to contextualize new data types. NWB provides a comprehensive set of tools and services for developing and using neurodata extensions. The NWB Specification API, HDMF DocUtils, and the PyNWB and MatNWB user APIs work with extensions with little adjustment . In addition, the NDX Template makes it easy for users to develop new extensions. Appendix 2 demonstrates the steps outlined in for the ndx-simulation-output extension shown in . The Neurodata Extensions Catalog then provides a centralized listing where users can publish, share, find, and discuss extensions across the community. Appendix 3 provides a more detailed overview of the extension workflow as part of the NDX Catalog.
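For orientation, the sketch below shows how a new neurodata type could be declared with the specification classes shipped with PyNWB; the type and attribute names are hypothetical, and in practice an extension would normally be scaffolded from the NDX Template rather than written by hand.

```python
from pynwb.spec import NWBNamespaceBuilder, NWBGroupSpec, NWBAttributeSpec

# Declare a hypothetical extension type that adds a metadata attribute to TimeSeries.
ns_builder = NWBNamespaceBuilder(doc="example extension", name="ndx-example", version="0.1.0")
ns_builder.include_type("TimeSeries", namespace="core")

laser_series = NWBGroupSpec(
    neurodata_type_def="LaserSeries",      # new type (hypothetical name)
    neurodata_type_inc="TimeSeries",       # inherits from the core TimeSeries type
    doc="A TimeSeries annotated with the laser power used during acquisition.",
    attributes=[
        NWBAttributeSpec(name="laser_power_mW", dtype="float", doc="Laser power in mW."),
    ],
)

# Write the extension and namespace YAML files for use alongside the core schema.
ns_builder.add_spec("ndx-example.extensions.yaml", laser_series)
ns_builder.export("ndx-example.namespace.yaml")
```

Once the namespace and extension YAML files are generated, they can be loaded by the user APIs alongside the core schema, so files containing the new type remain readable by standard NWB tooling.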
Several extensions have been registered in the Neurodata Extensions Catalog , including extensions to support the storage of the cortical surface mesh of an electrocorticography experiment subject, storage of fluorescence resonance energy transfer (FRET) microscopy data, and metadata from an intracellular electrophysiology acquisition system. The catalog also includes the ndx-simulation-output extension for the storage of the outputs of large-scale simulations. Large-scale network models that are described using the new SONATA format can be converted to NWB using this extension. The breadth of these extensions demonstrates that NWB will be able to accommodate new experimental paradigms in the future. As particular extensions gain traction within the community, they may be integrated into the core NWB format for broader use and standardization . NWB has a formal, community-driven review process for refining the core format so that NWB can adapt to evolving data needs in neuroscience. The owners of the extension can submit a community proposal for the extension to the NWB Technical Advisory Board, which then evaluates the extension against a set of metrics and best practices published on the catalog website. The extension is then tested and reviewed by both a dedicated working group of potential stakeholders and the general public before it is approved and integrated into the core NWB format. Key advantages of the extension approach are to allow iterative development of extensions and complete implementation and vetting of new data types under several use cases before they become part of the core NWB format. The NWB extension mechanism thus enables NWB to provide a unified data language for all data related to an experiment, allows describing of data from novel experiments, and supports the process of evolving the core NWB standard to fit the needs of the neuroscientific community. Making neurophysiology data accessible supports published findings and allows secondary reuse. To date, many neurophysiology datasets have been deposited into a diverse set of repositories (e.g. CRCNS, Figshare, Open Science Framework, Gin). However, no single data archive provides the neuroscientific community the capacity and the domain specificity to store and access large neurophysiology datasets. Most current repositories have specific limits on data sizes and are often generic, and therefore lack the ability to search using domain specific metadata. Further, for most neuroscientists, these archives often serve as endpoints associated with publishing, while research is typically an ongoing and collaborative process. Few data archives support a collaborative research model that allows data submission prospectively, analysis of data directly in the archive, and opening the conversation to a broader community. Enabling reanalysis of published data was a key challenge identified by the BRAIN Initiative. Together, these issues impede access and reuse of data, ultimately decreasing the return on investment into data collection by both the experimentalist and the funding agencies. To address these and other challenges associated with neurophysiology data storage and access, we developed DANDI, a Web-based data archive that also serves as a collaboration space for neurophysiology projects . The DANDI data archive ( https://dandiarchive.org ) is a cloud-based repository for cellular neurophysiology data and uses NWB as its core data language . Users can organize collections of NWB files (e.g. 
recorded from multiple sessions) into DANDI datasets (so-called Dandisets). Users can view the public Dandisets using a Web browser and search for data from different projects, people, species, and modalities. This search is over metadata that has been extracted directly from the NWB files where possible. Users can interact with the data in the archive using a JupyterHub Web interface to explore, visualize and analyze data stored in the archive. Using the DANDI Python client, users can organize data locally into the structure required by DANDI as well as download data from and upload data to the archive . Software developers can access information about Dandisets and all the files they contain using the DANDI Representational State Transfer (REST) API ( https://api.dandiarchive.org/ ). The REST API also allows developers to create software tools and database systems that interact with the archive. Each Dandiset is structured by grouping files belonging to different biosamples, with some relevant metadata stored in the name of each file, thus aligning with the BIDS standard . Metadata in DANDI is stored using the JSON-LD format, thus allowing graph-based queries and exposing DANDI to Google Dataset Search. Dandiset creators can use DANDI as a living repository and continue to add data and analyses to an existing Dandiset. Released versions of Dandisets are immutable and receive a digital object identifier (DOI). The data are presently stored on an Amazon Web Services Public Dataset Program bucket, enabling open access to the data over the Web, and backed up on institutional repositories. DANDI is working with hardware platforms (e.g. OpenEphys), database software (e.g. DataJoint) and various data producers to generate and distribute NWB datasets to the scientific community. DANDI provides neuroscientists and software developers with a Platform as a Service (PaaS) infrastructure based on the NWB data language and supports interaction via the web browser or through programmatic clients, software, and other services. In addition to serving as a data archive and providing persistence to data, it supports continued evolution of Dandisets. This allows scientists to use the archive to collect data toward common projects across sites, and engage collaborators actively, directly at the onset of data collection rather than at the end. DANDI also provides a computational interface to the archive to accelerate analytics on data and links these Dandisets to eventual publications when generated. The code repositories for the entire infrastructure are available on GitHub under an Apache 2.0 license. The goal of NWB is to accelerate the rate and improve the quality of scientific discovery through rigorous description and high-performance storage of neurophysiology data. Achieving this goal requires us to consider not just NWB, but the entire data life cycle, from planning, acquisition, processing, and analysis to publication and reuse . NWB provides a common language for neurophysiology data collected using existing and emerging neurophysiology technologies integrated into a vibrant neurophysiology data ecosystem. We describe the software relating to NWB as an ‘ecosystem’, because it is a marketplace of a diverse set of tools, each playing a different role, from data acquisition to visualization, analysis, and publication. NWB allows scientists to identify an unmet need and contribute new tools to address this need.
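As a small illustration of the programmatic route (a sketch assuming the DandiAPIClient interface of the dandi Python client; the Dandiset ID below is only a placeholder), the assets of a Dandiset can be listed directly from a script:

```python
from dandi.dandiapi import DandiAPIClient

# List the files (assets) contained in one Dandiset of the public archive.
# "000055" is used purely as an example identifier.
client = DandiAPIClient()
dandiset = client.get_dandiset("000055")
for asset in dandiset.get_assets():
    print(asset.path)

# An equivalent bulk download can be done from the shell, e.g.:
#   dandi download DANDI:000055
```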
Such an ecosystem is critical to truly make neurophysiology data Findable, Accessible, Interoperable, and Reusable (i.e. FAIR) . NWB supports experiment planning by helping users to clearly define what metadata to collect and how the data will be formatted and managed . To support data acquisition, NWB allows for the storage of unprocessed acquired electrical and optical physiology signals. Storage of these signals requires either streaming the data directly to the NWB file from the acquisition system or converting data from other formats after acquisition . Some acquisition systems, such as the MIES intracellular electrophysiology acquisition platform, already support direct recording to NWB, and the community is actively working to expand support for direct recording to NWB, for example, via ScanImage and OpenEphys . To allow utilization of legacy data and other acquisition systems, a variety of tools exist for converting neurophysiology data to NWB. For extracellular electrophysiology, the SpikeInterface package provides a uniform API for converting and processing data that supports conversion of 19+ different proprietary acquisition formats to NWB. For intracellular electrophysiology, the Intrinsic Physiology Feature Extractor (IPFX) package supports conversion of data acquired with Patchmaster. Direct conversion of raw data to NWB at the beginning of the data lifecycle facilitates data re-processing and re-analysis with up-to-date methods and data re-use more broadly. The NWB community has been able to grow and integrate with an ecosystem of software tools that offer convenient methods for processing data from NWB files (and other formats) and writing the results into an NWB file . NWB allows these tools to be easily accessed, compared, and used interoperably. Furthermore, storage of processed data in NWB files allows direct re-analysis of activation traces or spike times via novel analysis methods without having to reproduce time-intensive pre-processing steps. For example, the SpikeInterface API supports export of spike sorting results to NWB across nine different spike sorters and customizable data curation functions for interrogation of results from multiple spike sorters with common metrics. For optical physiology, several popular state-of-the-art software packages, such as CaImAn , suite2p , ciapkg , and EXTRACT , help users build processing pipelines that segment optical images into regions of interest corresponding to putative neurons, and write these results to NWB. There is also a range of general and application-specific tools emerging for analysis of neurophysiology data in NWB . The NWBWidgets library enables interactive exploration of NWB files via web-based views of the NWB file hierarchy and dynamic plots of neural data, for example visualizations of spike trains and optical responses. NWB Explorer, developed by MetaCell in collaboration with OpenSourceBrain, is a web app that allows a user to explore any publicly hosted NWB file and supports custom visualizations and analysis via Jupyter notebooks, as well as use of the NWBWidgets. These tools allow neuroscientists to inspect their own data for quality control, and enable data reusers to quickly understand the contents of a published NWB file. In addition to these general-purpose tools, many application-specific tools, for example, RAVE , ecogVIS , Brainstorm , Neo , and others are already supporting or are in the process of developing support for analysis of NWB files.
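To illustrate the kind of re-analysis this enables, the short sketch below (the file name and the crude duration estimate are placeholders) computes a mean firing rate directly from the sorted spike times stored in an NWB file, without touching the raw data or re-running spike sorting.

```python
import numpy as np
from pynwb import NWBHDF5IO

# Re-analyze previously sorted spike times stored in an NWB file.
with NWBHDF5IO("example_session.nwb", "r") as io:
    nwbfile = io.read()
    units = nwbfile.units.to_dataframe()    # one row per putative single unit
    rates = [
        len(st) / (st[-1] - st[0])          # spikes divided by observed span (crude)
        for st in units["spike_times"]
        if len(st) > 1
    ]
    print(f"mean firing rate across {len(rates)} units: {np.mean(rates):.2f} Hz")
```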
Many journals and funding agencies are beginning to require that data be made FAIR. For publication and preservation, archives are an essential component of the NWB ecosystem, allowing data producers to document data associated with publications and share that data with others. NWB files can be stored in many popular archives, such as FigShare and Collaborative Research in Computational Neuroscience (CRCNS.org). As described earlier, in the context of the NIH BRAIN Initiative, the DANDI archive has been specifically designed to publish and validate NWB files and leverage their structure for searching across datasets. In addition, several other archives, for example, DABI and OpenSourceBrain , are also supporting publication of NWB data. Data archives also play a crucial role in discovery and reuse of data . In addition to providing core functionality for data storage and search, archives increasingly also provide compute capacity for reanalysis. For example, DANDI Hub provides users a familiar JupyterHub interface that supports interactive exploration and processing of NWB files stored in the archive. The NWB data APIs, validation, and inspection tools also play a critical role in data reuse by enabling access and ensuring data validity. Finally, the Neurodata Extensions Catalog described earlier facilitates accessibility and reuse of data files that use NWB extensions. NWB integrates with (not competes with) existing and emerging technologies across the data lifecycle, creating a flourishing NWB software ecosystem that enables users to access state-of-the-art analysis tools, share and reuse data, improve efficiency, reduce cost, and accelerate and enable the scientific discovery process. See also Appendix 4 for an overview of NWB-enabled tools organized by application area and environment. Thus, NWB provides a common language to describe neurophysiology experiments, data, and derived data products that enables users to maintain and exchange data throughout the data lifecycle and access state-of-the-art software tools. There have been previous efforts to standardize neurophysiology data, such as NWB(v1.0) and NIX . While NWB(v1.0) drafted a standard for neurophysiology, it lacked generality, which limited its scope, and did not have a reliable and rigorous software strategy and APIs, making it hard to use and unreliable in practice. In contrast, NIX defines a generic data model for storage of annotated scientific datasets, with APIs in C++ and Python and bindings for Java and MATLAB. As such, NIX provides important functionality towards building a FAIR data ecosystem. However, the NIX data model lacks specificity with regard to neurophysiology, leaving it up to the user to define appropriate schema to facilitate FAIR compliance. Due to this lack of specificity, NIX files can also be more varied in structure and naming conventions, which makes it difficult to aggregate across NIX datasets from different labs. In , we assess and compare the compliance of different solutions for sharing neurophysiology data with FAIR data principles. The assessment for NIX is based on the INCF review for SPP endorsement . We also include a more in-depth breakdown of the assessment in Appendix 5. With increasing specificity of data models and standard schema—that is, as we move from general, self-describing formats (e.g. Zarr or HDF5) to generic data models (e.g. NIX) to application-specific standards (e.g. NWB)—compliance with FAIR principles and rigidness of the data specification increases.
In practice, the various approaches often focus on different data challenges. As such, this is not an assessment of the quality of a product per se, but an assessment of the out-of-the-box compliance with FAIR principles in the context of neurophysiology. For example, while self-describing data formats (like HDF5 or Zarr) lack specifics about (meta)data related to neurophysiology, they provide important technical solutions towards enabling high-performance data management and storage. Complementary to standardization of data, software packages, e.g., Neo , SpikeInterface and others, aim to simplify programmatic interaction with neurophysiology data in diverse formats and/or tools with diverse programming interfaces (e.g. for spike sorting) by providing common software interfaces for interacting with the data/tools. This strategy provides an important conduit to enable access to and facilitate integration with a diversity of data and tools. However, this approach does not address (nor does it aim to address) the issue of compliance of data with FAIR principles, but rather aims to improve interoperability between and interaction with a diversity of tools and data formats. Ultimately, standardization of data and creation of common software interfaces are not competing strategies, but are synergistic approaches that together help create a more integrated data ecosystem. Indeed, tools such as SpikeInterface are an important component of the larger NWB software ecosystem that help create an accessible neurophysiology data ecosystem by making it easier for users to integrate their data and tools with NWB and facilitating access and interoperability of diverse tools. Data standards build the foundation for an overall data strategy to ensure compliance with FAIR data principles. Ultimately, however, ensuring FAIR data sharing and use depends on an ecosystem of data standards and data management, analysis, visualization, and archive tools as well as laws, regulations, and data governance structures—for example the NIH BRAIN Initiative Data Sharing Policy or the OMB Open Data Policy —all working together . As illustrates, it is ultimately the combination of NWB and DANDI working together that enables compliance with FAIR principles. Here, certain aspects, such as usage licenses (R.1.1), indexing and search (F.4), authenticated access (A1.2), and long-term availability of metadata (A2), are explicitly the role of the archive. As this table shows, together, NWB and DANDI make neurophysiology data FAIR.
Coordinated community engagement, governance, and development of NWB
The neurophysiology community consists of a large diversity of stakeholders with vested interests and broad use cases. Inclusive engagement and outreach with the community are central to achieving acceptance and adoption of NWB and to ensuring that NWB meets user needs. Thus, development of scientific data languages is as much a sociological challenge as it is a technological challenge. To address this challenge, NWB has adopted a modern open-source software strategy with community resources and governance, and a variety of engagement activities. Execution of this strategy requires coordinating efforts across stakeholders, use cases, and standard technologies to prioritize software development and resolve potential conflicts. Such coordination necessitates a governance structure reflecting the values of the project and the diverse composition of the community .
The NWB Executive Board consists of diverse experimental and computational neuroscientists from the community and serves as the steering committee for developing the long-term vision and strategy. The NWB Executive Board (see Acknowledgements) was established in 2007 as an independent body from the technical team and PIs of NWB grants. The NWB Core Technology Team leads and coordinates the development of the NWB data language and software infrastructure to ensure quality, stability, and consistency of NWB technologies, as well as timely response to user issues. The Core Technology Team reports regularly to and coordinates with the Executive Board. Neuroscience Application Teams, consisting of expert users and core developers, lead engagement with targeted neuroscience areas in electrophysiology, optical physiology, and other emerging applications. These application teams are responsible for developing extensions to the data standard, new features, and technology integration together with Community Working Groups. The working groups allow for an agile, community-driven development and evaluation of standard extensions and technologies, and allow users to directly engage with the evolution of NWB. The broader neuroscience community further contributes to NWB via issue tickets, contributions to NWB software and documentation, and by creating and publishing data in NWB. This governance and development structure emphasizes a balance between the stability of NWB technologies that ensure reliable production software, direct engagement with the community to ensure that NWB meets diverse stakeholder needs, and agile response to issues and emerging technologies. The balance between stability, diversity, and agility is also reflected in the overall timeline of the NWB project . The NWB 1.0 prototype focused on evaluation of existing technologies and community needs and development of a draft data standard. Building on this prototype, the NWB 2.0 project initially focused on the redesign and productization of NWB, emphasizing the creation of a sustainable software architecture, reliable data standard, and software ready for use. Following the first full release of NWB 2.0 in January 2019, the emphasis then shifted to adoption and integration of NWB. The goal has been to grow a community and software ecosystem as well as maintenance and continued refinement of NWB. Together, these technical and community engagement efforts have resulted in a vibrant and growing ecosystem of public NWB data (Appendix 6) and tools utilizing NWB. The core NWB software stack has continued to grow steadily since the release of NWB 2.0 in January 2019, illustrating the need for continued development and maintenance of NWB . See also Appendix 7 for an overview of the software release process and history for the NWB schema and APIs. In 2020, more than 600 scientists participated in NWB developer and user workshops and we have seen steady growth in attendance at NWB events over time . At the same time, the global reach of NWB has also been increasing over time . The NWB team also provides extensive online training resources, including video and code tutorials, detailed documentation, as well as guidelines and best practices (see and Materials and methods). Community liaisons provide expert consultation for labs adopting NWB and for creating customized data conversion software for individual labs.
As the table in Appendix 6 shows, despite its young age relative to the neurophysiology community, NWB 2.0 is being adopted by a growing number of neuroscience laboratories and projects led by diverse principal investigators, creating a representative community where users can exchange and reuse data, with NWB as a common data language . Investigating the myriad functions of the brain across species necessitates a massive diversity and complexity of neurophysiology experiments. This diversity presents an outstanding barrier to meaningful sharing and collaborative analysis of the collected data, and ultimately prevents the data from being FAIR. To overcome this barrier, we developed a data ecosystem based on the Neurodata Without Borders (NWB) data language and software. NWB is being utilized by more than 36 labs to enable unified storage and description of intracellular, extracellular, LFP, ECoG, and Ca2+ data in flies, mice, rats, monkeys, humans, and simulations. To support the entire data lifecycle, NWB natively operates with processing, analysis, visualization, and data management tools, as exemplified by the ability to store both raw and pre-processed simultaneous electrophysiology and optophysiology data. Formal extension mechanisms enable NWB to co-evolve with the needs of the community. NWB enables DANDI to provide a data archive that also serves as a collaboration space for neurophysiology projects. Together, these technologies greatly enhance the FAIRness of neurophysiology data. We argue that there are several key challenges that, until NWB, have not been successfully addressed and which ultimately hindered widespread adoption of a common standard by the diverse neurophysiology community. Conceptually, the complexity of the problem necessitates an interdisciplinary approach of neuroscientists, data and computer scientists, and scientific software engineers to identify and disentangle the components of the solution. Technologically, the software infrastructure instantiating the standard must integrate the separable components of user-facing interfaces (i.e. Application Programming Interfaces, APIs), data modeling, standard specification, data translation, and storage format. This must be done while maintaining sustainability, reliability, stability, and ease of use for the neurophysiologist. Furthermore, because science is advanced by both development of new acquisition techniques and experimental designs, mechanisms for extending the standard to unforeseen data and metadata are essential. Sociologically, the neuroscientific community must accept and adopt the standard, requiring coordinated community engagement, software development, and governance. NWB directly addresses these challenges.
NWB as the lingua franca of neurophysiology data
Making neurophysiology data FAIR requires a paradigm shift in how we conceptualize the solution. Scientists need more than a rigid data format; they require a flexible data language. Such a language should enable scientists to communicate via data. Natural languages evolve with the concepts of the societies that use them, while still providing a stable basis that enables communication of common concepts. Similarly, a scientific data language should evolve with the scientific research community, and at the same time provide a standardized core that expresses common and established methods and data types. NWB is such a data language for neurophysiology experiments. There are many parallels between NWB and natural languages as used today.
The NWB specification language provides the basic tools and rules for creating the core concepts required to describe neurophysiology data, much like an alphabet and phonetic rules in natural language describe the creation of words. Likewise, the format schema provides the words and phrases (neurodata_types) of the data language and rules for how to compose them to form data documents (NWB files), much like a dictionary and grammatical rules for sentence and document structure. Similarly, flexible data storage methods allow NWB to manage and share data in different forms depending on the application, much like we store natural languages in many different mediums (e.g. via printed books, electronic records, or handwritten notes). User APIs (here HDMF, PyNWB, and MatNWB) provide the community with tools to create, read, and modify data documents and interact with core aspects of the language, similarly to text editors for natural language. NWB Extensions provide a mechanism to create, publish, and eventually integrate new modules into NWB to ensure it co-evolves with the tools and needs of the neurophysiology community, just as communities create new words to communicate emerging concepts. Finally, DANDI provides a cloud-based platform for archiving, sharing, and collaborative analysis of NWB data, much like a bookstore or Wikipedia. Together, these interacting components provide the basis of a data language and exchange medium for the neuroscience community that enables reproduction, interchange, and reuse of diverse neurophysiology data.
NWB is community driven, professionally developed, and democratizes neurophysiology
Today, there are many data formats and tools used by the neuroscience community that are not interoperable. Often, formats and tools are specific to the lab and even the researcher. This level of specificity is a major impediment to sharing data and reproducing results, even within the same lab. More broadly, the resulting fragmentation of the data space reinforces siloed research, and makes it difficult for datasets or software to be impactful on a community level. Our goal is for the NWB data language to be foundational in deepening collaborations within the community of neuroscientists. The current NWB software is the result of an intense, community-led, years-long collaboration between experimental and computational neuroscientists with data scientists and computer scientists. Core to the principles and success of NWB is accounting for diverse perspectives and use cases in the development process, integrating with community tools, and engaging in community outreach and feedback collection. NWB is governed by a diverse group to ensure both the integrity of the software and that NWB continues to meet the needs of the neuroscience community. As with all sophisticated scientific instruments, there is some training required to get a lab’s data into NWB. Several training and outreach activities provide opportunities for the neuroscience community to learn how to most effectively utilize NWB. Tutorials, hackathons and user training events allow us to bring together neuroscientists who are passionate about open data and data management. These users bring their own data to convert or their own tools to integrate, which in turn makes the NWB community more diverse and representative of the overall neuroscience community. NWB's digital presence has accelerated during the COVID pandemic, and has allowed the community to grow internationally and at an exponential rate.
Updates on Twitter and the website ( https://www.nwb.org/ ), tutorials on YouTube, and free virtual hackathons are all universally accessible and have helped achieve a global reach, interacting with scientists from countries that are too often left out. Together, these outreach activities combined with NWB and DANDI democratize both neurophysiology data and analysis tools, as well as the extracted insights.
The future of NWB
To address the next frontier in grand challenges associated with understanding the brain, the neuroscience community must integrate information across experiments spanning several orders of magnitude in spatial and temporal scales . This issue is particularly relevant in the current age of massive neuroscientific data sets generated by emerging technologies from the US BRAIN Initiative, Human Brain Project, and other brain research initiatives worldwide. Advanced data processing, machine learning, and artificial intelligence algorithms are required to make discoveries based on such massive volumes of data . Currently, different domains of neuroscience (e.g. genomics/transcriptomics, anatomy, neurophysiology) are supported by standards that are not coordinated. Building bridges across neuroscience domains will necessitate interaction between the standards, and will require substantial future efforts. There are nascent activities for compatibility between NWB and the Brain Imaging Data Structure (BIDS), for example, as part of the BIDS human intracranial neurophysiology ECoG/iEEG extension , but further efforts in this and other areas are needed. It is notoriously challenging to make neurophysiology data FAIR. Together, the NWB data language and the NWB-based DANDI data archive support a data science ecosystem for neurophysiology. NWB provides the underlying cohesion of this ecosystem through a common language for the description of data and experiments. However, like all languages, NWB must continue to adapt to accommodate advances in neuroscience technologies and the evolving community using that language. As adoption of NWB continues to grow, new needs and opportunities for further harmonization of metadata arise. A key ongoing focus area is the development and integration of ontologies with NWB to enhance specificity, accuracy, and interpretability of data fields. For example, there are NWB working groups on genotype and spatial coordinate representation, as well as the INCF Electrophysiology Stimulation Ontology working group . Another key area is extending NWB to new areas, such as the ongoing working groups on integration of behavioral task descriptions with NWB (e.g. based on BEADL ) and enhanced integration of simulations with NWB. We strongly advocate for funding support of all aspects of the data-software life cycle (development, maintenance, integration, and distribution) to ensure the neuroscience community fully reaps the benefits of investment into neurophysiology tools and data acquisition.
Core design principles and technologies for biological data languages
The problems addressed by NWB technologies are not unique to neurophysiology data. Indeed, as was recently discussed in , lack of standards in genomics data is threatening the promise of that data. Many of the tools and concepts of the NWB data language can be applied to enhance standardization and exchange of data in biology more broadly. For example, the specification language, HDMF, the concept of extensions and the extension catalog are all general and broadly applicable technologies.
Therefore, the impact of the methods and concepts we have described here has the potential to extend well beyond the boundaries of neurophysiology. We developed design and implementation principles to create a robust, extensible, maintainable, and usable data ecosystem that embraces and enables FAIR data science across the breadth of neurophysiology data. Across biology, experimental diversity and data heterogeneity are the rule, not the exception . Indeed, as biology faces the daunting frontier of understanding life from atoms to organisms, the complexity of experiments and multimodality of data will only increase. Therefore, the principles developed and deployed by NWB may provide a blueprint for creating data ecosystems across other fields of biology.
NWB GitHub organizations
All NWB software is available open source via the following three GitHub organizations. The Neurodata Without Borders GitHub organization is used to manage all software resources related to core NWB software developed by the NWB developer community, for example, the PyNWB and MatNWB reference APIs. The HDMF development GitHub organization is used to publish all software related to the Hierarchical Data Modeling Framework (HDMF), including HDMF, HDMF DocUtils, HDMF Common Schema, and others. Finally, the NWB Extensions GitHub organization is used to manage all software related to the NDX Catalog, including all extension registrations. Note that the catalog itself only stores metadata about NDXs; the source code of NDXs is often managed by the creators in dedicated repositories in their own organizations.
HDMF software
HDMF software is available on GitHub using an open BSD license model.
Hierarchical Data Modeling Framework (HDMF) is a Python package for working with hierarchical data. It provides APIs for specifying data models, reading and writing data to different storage backends, and representing data with Python objects.
HDMF builds the foundation for the implementation of PyNWB and the NWB specification language. [Source] [Documentation] [Web] .
HDMF Documentation Utilities (hdmf-docutils) are a collection of utility tools for creating documentation for data standard schema defined using HDMF. The utilities support generation of reStructuredText (RST) documents directly from standard schema, which can be compiled to a large variety of common document formats (e.g. HTML, PDF, epub, man, and others) using Sphinx. [Source] .
HDMF Common Schema defines a collection of common reusable data structures that build the foundation for modeling of advanced data formats, e.g., NWB. APIs for the HDMF common data types are implemented as part of the hdmf.common module in the HDMF library. [Source] [Documentation] .
HDMF Schema Language provides an easy-to-use language for defining hierarchical data standards. APIs for creating and interacting with HDMF schema are implemented in HDMF. [Documentation] .
NWB software
NWB software is available on GitHub using an open BSD license model.
PyNWB is the Python reference API for NWB and provides a high-level interface for efficiently working with Neurodata stored in the NWB format. PyNWB is used by users to create and interact with NWB and by neuroscience tools to integrate with NWB. [Source] [Documentation] .
MatNWB is the MATLAB reference API for NWB and provides an interface for efficiently working with Neurodata stored in the NWB format. MatNWB is used by both users and developers to create and interact with NWB and by neuroscience tools to integrate with NWB. [Source] [Documentation] .
NWBWidgets is an extensible library of widgets for visualizing NWB data in a Jupyter notebook (or lab). The widgets support navigation of the NWB file hierarchy and visualization of specific NWB data elements. [Source] .
NWB Schema defines the complete NWB data standard specification. The schema is a collection of YAML files in the NWB specification language describing all neurodata_types supported by NWB and their organization in an NWB file. [Source] [Documentation] .
NWB Schema Language is a specialized variant of the HDMF schema language. The language includes minor modifications (e.g., use of the term neurodata_type instead of data_type) to make the language more intuitive for neuroscience users, but it is otherwise identical to the HDMF schema language. Dedicated interfaces for creating and interacting with NWB schema are available in PyNWB. [Documentation] .
NWB Storage defines the mapping of NWB specification language primitives to HDF5 for storage of NWB files. [Documentation] .
Neurodata Extensions Catalog (NDX Catalog) is a community-led catalog of Neurodata Extensions (NDX) to the NWB data standard. All extensions mentioned in the text can be accessed directly via the catalog. [Source] [Online] .
NWB Extensions Template (ndx-template) provides an easy-to-use template based on the Cookiecutter library for creating Neurodata Extensions (NDX) for the NWB data standard. [Source] .
NWB Staged Extensions is a repository for submitting new extensions to the NDX catalog. [Source] .
DANDI
The DANDI archive was created by developing and integrating several open-source projects and BRAIN Initiative data standards (NWB, BIDS, NIDM). The Web browser application is built using the VueJS framework and the DANDI command line interface is built using Python and PyNWB. The initial backend of the archive was built on top of the Girder data management system, and is transitioning to a Django-based framework.
The DANDI analysis hub is built using Jupyterhub deployed over a Kubernetes cluster. The different components of the archive are hosted on Amazon Web Services and the Heroku platform. The code repositories for the entire infrastructure are available on GitHub under an Apache 2.0 license.
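To make the reference APIs listed above more concrete, the following is a minimal sketch of how an NWB file might be created and written with PyNWB. The session metadata, file name, and signal values are purely illustrative placeholders; only the PyNWB classes and methods used here (NWBFile, TimeSeries, add_acquisition, NWBHDF5IO) belong to the API described above.

from datetime import datetime, timezone

import numpy as np
from pynwb import NWBFile, NWBHDF5IO, TimeSeries

# Assemble an in-memory NWBFile with the minimal required session metadata.
nwbfile = NWBFile(
    session_description="example recording session",  # illustrative value
    identifier="EXAMPLE-0001",                         # illustrative value
    session_start_time=datetime.now(timezone.utc),
)

# Wrap a raw signal (random numbers standing in for acquired data)
# in a TimeSeries and register it as acquisition data.
raw = TimeSeries(
    name="raw_signal",
    data=np.random.randn(10000),
    unit="volts",
    rate=1000.0,  # sampling rate in Hz
)
nwbfile.add_acquisition(raw)

# Write the file to the HDF5 backend, then read it back.
with NWBHDF5IO("example_session.nwb", mode="w") as io:
    io.write(nwbfile)

with NWBHDF5IO("example_session.nwb", mode="r") as io:
    nwbfile_in = io.read()
    print(nwbfile_in.acquisition["raw_signal"].data[:10])

The same data model is exposed in MATLAB through MatNWB, so files written with one reference API can be read with the other.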
|
Neurophysiological Evaluation of Neural Transmission in Brachial Plexus Motor Fibers with the Use of Magnetic versus Electrical Stimuli | 9201673b-aee4-4e81-b800-4315d8b3a127 | 10146775 | Physiology[mh] | The anatomical complexity of the brachial plexus and its often multilevel damage require specialized in-depth diagnostics. The purpose is to select the appropriate treatment, assess its effectiveness, and provide prognostic information about its course . Imaging of the brachial plexus, such as ultrasound or magnetic resonance imaging, provides important information about the nerve structures and surrounding tissues. Contemporary studies emphasize the importance of these tests, but they do not mention assessing the brachial plexus function . Besides the clinical examination , the diagnostic standard for brachial plexus function should include clinical neurophysiology tests. Electroneurography (ENG) studies are used to assess the function of motor fibers and peripheral sensory nerves. Somatosensory evoked potentials are used to evaluate afferent sensory pathways. Needle electromyography analyses the bioelectrical activity of the muscles innervated by peripheral nerves originating from the brachial plexus. The results of the tests above determine the extent, type, and severity of the damage. ENG of motor fibers uses a specific low-voltage electrical stimulus. It stimulates the nerve motor fibers, causing their depolarization, and the excitation spreads to the muscle, resulting in the generation of compound muscle action potential (CMAP). The strength of the electrical stimulus should be supramaximal, of sufficient intensity to generate CMAP with the highest amplitude and shortest latency. The CMAP amplitude reflects the number of conducting motor axons, and latency refers to the function of the myelin sheath and the rate of depolarization, mainly in fast-conducting axons . Despite the advantages of this type of stimulation, it has limitations due to the physical properties of the electrical stimulus. The main limitation is the inability to penetrate through the bone structures surrounding the brachial plexus in its proximal part, at the level of the spinal roots, at the spinal nerves in the neck, and often at Erb’s point. Stimulation at Erb’s point may be complicated due to the individual anatomy of the examined person, such as obesity, extensive musculature, or past injuries at this level. This can significantly affect the CMAP parameters and give false positive results indicating pathology of the assessed motor fibers. In contrast to ENG, magnetic stimulus is used to induce motor evoked potential (MEP) . Its use in brachial plexus diagnostics overcomes these limitations, which is of great clinical importance . The propagation of excitation along the axon and elicitation of motor potential using a magnetic stimulus is similar to electrical stimulation. However, as some authors indicate, the applied magnetic stimulus may be submaximal due to magnetic stream dispersion or insufficient power generated by the stimulation coil. Therefore, the assessment of MEP parameters may not reflect the actual number of excitable axons, and the interpretation of the results may incorrectly determine the functional status of the brachial plexus. An MEP study can provide important information regarding the location of the injury, especially in cases of traumatic damage to the brachial plexus where there may be multiple levels of impairment. 
Because the magnetic stimulus released from the generator device can penetrate bone structures, its physical properties should allow an assessment of the proximal part of the brachial plexus, especially at the level of the spinal roots. Scientific studies are mainly concerned with MEP efferent conduction studies in patients with disc–root conflict and other neurological disorders . Little attention has been paid to using MEP to assess the peripheral part of the lower motoneurone, including injuries of the brachial plexus; such studies constitute a novel element among the aims of the presented study. The main concern has been high-voltage electrical stimulation applied over the vertebrae . To the best of our knowledge, apart from studies by Schmid et al. and Cros et al. from 1990, this paper is one of the few sources of reference values. Therefore, it makes a practical contribution to the routine neurophysiological diagnosis of brachial plexus injuries. The aim of this study was to reinvestigate the hypothesis concerning the usefulness of the MEP test applied both over the vertebrae and at Erb's point to assess the neural transmission of the brachial plexus motor fibers, with special attention to the functional evaluation of the short brachial plexus branches. The latter element has not been examined in detail ; most of the studies have been devoted to the evaluation of the long nerves, such as the median or ulnar. In addition, we formulated the following secondary goals: to compare the parameters of electrically evoked potentials (CMAP) with the parameters of potentials generated by magnetic stimulus (MEP), and to analyze whether these stimulation methods have comparable effectiveness and whether they could be used interchangeably during an examination. This would make it possible to select a method by taking into account the individual patient's needs and the examination targets. Moreover, an additional aim of our work was to confirm that magnetic stimulation induces supramaximal potentials with the same parameters as electrical stimulation, which was previously considered a methodological limitation . A further study aim was to verify the assumption that magnetic stimulation is less painful than electrical stimulation and better tolerated by patients during neurophysiological examinations, which has never before been examined. 2.1. Study Design, Participants, and Clinical Evaluation Seventy-five volunteer subjects were randomly chosen to participate in the research. The ethical considerations of the study were compliant with the Declaration of Helsinki. Approval was granted by the Bioethical Committee of the University of Medical Sciences in Poznań, Poland (resolution no. 554/17). All the subjects signed a written consent form to voluntarily participate in the study without financial benefit. The consent included all the information necessary to understand the purpose of the study, the scope of the diagnostic procedures, and their characteristics. Before the study began, fifteen subjects declined to participate. The subjects in the study group (N = 60) were enrolled based on the results of clinical studies performed independently by a clinical neurophysiologist and a neurologist. The exclusion criteria included craniocerebral, cervical spine, shoulder girdle, brachial plexus, or upper extremity injuries and other systemic disorders under treatment.
The contraindications to undergoing neurophysiological tests were pregnancy, stroke, oncological disorder, epilepsy, metal implants in the head or spine, and an implanted cardiac pacemaker or cochlear implant (because of the use of magnetic stimulation). The results were analyzed blindly to ensure intra-rater reliability. The medical history and clinical studies consisted of evaluating the sensory perception of the upper extremities according to the C5-C8 dermatomes and peripheral nerve sensory distribution, based on von Frey's monofilament method . The maximal strength of the upper extremity muscles was assessed using Lovett's scale . A bilateral clinical examination of each volunteer was performed once. Based on the clinical examination and medical history, the neurologist classified the subjects in the research group as healthy volunteers. After excluding 14 participants who did not meet the inclusion criteria and 4 others who withdrew during the neurophysiological examinations, the final group included 42 subjects. The characteristics of the study group (N = 42) and a flowchart of the diagnostic algorithm proposed in this study are presented in and . There were 40 right-handed participants and only 2 left-handed. 2.2. Neurophysiological Examination All the participants were examined bilaterally once according to the same neurophysiological schedule. Each time, we used both magnetic and electrical stimuli to assess the function of the peripheral nerve and a magnetic stimulus to evaluate neural transmission from the cervical spinal root. We applied stimulation three times at Erb's point and at the selected level of the cervical segment, checking the repeatability of the evoked potential. The compound muscle action potentials (CMAP) recorded during electroneurography (ENG) and the motor evoked potentials (MEP) induced by magnetic stimulation were analyzed. During the neurophysiological examination, the subjects were in a seated position, with relaxed muscles of the upper extremities and shoulder girdle, and in a quiet environment. The KeyPoint Diagnostic System (Medtronic A/S, Skøvlunde, Denmark) was used for the MEP and CMAP recordings. The external magnetic stimulus for the MEP studies was applied by a MagPro X100 magnetic stimulator (Medtronic A/S, Skøvlunde, Denmark) via a circular coil (C-100, 12 cm in diameter) ( A,B). The strength of the magnetic field stream was 100% of the maximal stimulus output, which means 1.7 T for each pulse. The recordings were performed at an amplification of 20 mV/D and a time base of 5–8 ms/D. For the CMAP recording, a bipolar stimulation electrode and a single rectangular electric stimulus with a duration of 0.2 ms at a frequency of 1 Hz were used. The intensity of the electrical stimulus was 100 mA to evoke the supramaximal CMAP amplitude at Erb's point. Such strength is obligatory and is determined by anatomical conditions and the fact that the nerve structures of the brachial plexus lie deep in the supraclavicular fossa. In the ENG studies, the time base was set to 5 ms/D, the sensitivity of recording to 2 mV/D, and 10 Hz lower and 10 kHz upper filters were used in the recorder amplifier. A bipolar stimulation electrode was used, the poles of which were moistened with a saline solution (0.9% NaCl). The skin where the ground electrode and recording electrodes were placed was disinfected with a 70% alcohol solution; along with the conductive gel, this reduced the resistance between the skin and the recording sensors. The impedance did not exceed 5 kΩ.
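As a compact reference, the acquisition and stimulation settings listed above can be collected in a single structure, as in the following sketch; the dictionary layout and key names are our own shorthand and are not part of any vendor software.

# Recording and stimulation settings as reported above; the layout is illustrative.
ACQUISITION_SETTINGS = {
    "magnetic_stimulation": {
        "stimulator": "MagPro X100 with circular coil C-100 (12 cm diameter)",
        "output_fraction_of_max": 1.0,   # 100% of maximal stimulator output
        "field_strength_T_per_pulse": 1.7,
        "amplification_mV_per_div": 20,
        "time_base_ms_per_div": (5, 8),
    },
    "electrical_stimulation": {
        "pulse_shape": "rectangular",
        "pulse_duration_ms": 0.2,
        "frequency_Hz": 1,
        "intensity_mA": 100,             # supramaximal at Erb's point
        "time_base_ms_per_div": 5,
        "sensitivity_mV_per_div": 2,
        "filter_band_Hz": (10, 10_000),  # lower and upper cutoffs
    },
    "max_electrode_impedance_kohm": 5,
}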
In the ENG examination, the bipolar stimulation electrode was applied at Erb's point over the supraclavicular region, along the anatomical passage of the brachial plexus motor fibers. If a repetitive CMAP with the shortest latency and the highest amplitude was evoked at this point, the spot became the starting point for the application of magnetic stimulation at this level (hot spot). To assess the MEP from the spinal roots of the cervical segment, the magnetic coil was applied 0.5 cm laterally and slightly below the spinous process, in accordance with the anatomical location of the spinal roots (C5–C8). In this way, the cervical roots were selectively stimulated. For the recording of CMAP and MEP, standard disposable Ag/AgCl surface sensors with an active surface of 5 mm² were used in the same location for both the electrical and magnetic stimuli. The active electrode was placed over the muscle belly innervated by the peripheral nerve originating from the superior, middle, or inferior trunk of the brachial plexus. The same selected muscles also represented a specific root domain in accordance with the innervation of the upper extremity through the cervical segment of the spine. The reference electrode was placed distal to the active ones, depending on the muscle, i.e., on the olecranon or the tendon . A list of the tested muscles and their innervation (peripheral pathway and root domain), as well as the locations of the electrodes, is given in . The same parameters were analyzed for both the CMAP and MEP recordings: the amplitude of the negative deflection (from baseline to negative peak, measured in mV), the distal latency (DL) (from the visible stimulation artefact to the negative deflection of the potential, measured in ms), and the standardized latency (SL), calculated by the equation SL = DL/LNS, where LNS is the length of the nerve segment between the stimulation point (Erb's point) and the recording area on the muscle (measured in cm). A reliable value of standardized latency depends on an accurate distance measurement. Therefore, a pelvimeter, which reduces the risk of error in measuring the distance between the stimulation point and the recording electrode, was used in the research. This makes it possible to take into account the anatomical curvature of the brachial plexus nerves. The standardized latency relates latency directly to distance. This is important in assessing the conduction of the brachial plexus short branches with regard to various anthropometric features of the examined subjects, such as the length of the upper extremities relative to height. In standard neurophysiological tests of short nerve branches, the F wave is not assessed; hence, the calculation of the root conduction time for nerves such as the axillary or musculocutaneous is not possible. In order to assess conduction in the proximal part of these nerves, a standardized latency was also calculated for the proximal segment (proximal standardized latency, PSL) using the following equation: PSL = (MRL − MEL)/D, where MRL is the latency of the MEP elicited by root-level stimulation (measured in ms), MEL is the latency of the MEP elicited by Erb's point stimulation (measured in ms), and D is the distance between these two stimulation points (measured in cm). Therefore, the PSL value reflects the conduction between the cervical root and Erb's point for each examined nerve. Distal latency and standardized latency correspond to the conduction speed of the fastest axons.
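As a worked illustration of the two latency indices defined above, the short sketch below computes SL and PSL; the numerical values are invented for demonstration only and are not data from this study.

def standardized_latency(dl_ms, lns_cm):
    # SL = DL / LNS, in ms/cm (Erb's point to the recording site).
    return dl_ms / lns_cm

def proximal_standardized_latency(mrl_ms, mel_ms, d_cm):
    # PSL = (MRL - MEL) / D, in ms/cm (cervical root to Erb's point).
    return (mrl_ms - mel_ms) / d_cm

# Hypothetical example values (not study data):
dl = 4.6    # distal latency from Erb's point, ms
lns = 23.0  # length of the nerve segment, cm
mrl = 5.8   # MEP latency from root-level stimulation, ms
mel = 4.6   # MEP latency from Erb's point stimulation, ms
d = 6.0     # distance between the two stimulation points, cm

print(standardized_latency(dl, lns))               # 0.2 ms/cm
print(proximal_standardized_latency(mrl, mel, d))  # 0.2 ms/cm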
The amplitude of the recorded potentials and their morphology reflect the number of conducting motor fibers . After undergoing the neurophysiological tests, the subjects reported which of the applied stimuli (electrical or magnetic) evoked a painful sensation, as scored on a 10-point visual analogue scale (VAS) . 2.3. Statistical Analysis The statistical data were analyzed using Statistica 13.3 software (StatSoft, Kraków, Poland) and are presented with descriptive statistics: minimal and maximal values (range), and mean and standard deviation (SD) for measurable values. The Shapiro–Wilk test was performed to assess the normality of distribution, and Levene's test was used to define the homogeneity of variance in some cases. The results from the neurophysiological studies were compared to determine the differences between the sides (left and right), genders (female and male), stimulation techniques (electrical and magnetic), and stimulation areas (Erb's point and cervical root). The differences in the evoked potential parameters between the groups of men and women were assessed with an independent Student's t-test. In cases where the distribution was not normal, a Mann–Whitney U test was used. The dependent Student's t-test (paired difference t-test) or Wilcoxon's test (in the absence of distribution normality) was used to compare the differences between the stimulation methods, stimulation areas, and sides of the body. p-values less than 0.05 were considered statistically significant. The percentage of difference was expressed for each variable. An analysis of lateralization influence was not performed because there was only one left-handed volunteer. With regard to the results of the clinical tests, including pain measured by a 0–10 point visual analogue scale (VAS) and muscle strength measured by the 0–5 point Lovett's scale, the minimum and maximum values (range) and mean and standard deviation (SD) are presented. At the beginning of the pilot study, statistical software was used to determine the required sample size using the amplitudes from the MEP and ENG recordings, with a power of 80% and a significance level of 0.05 (two-tailed), as the primary outcome variable. The mean and standard deviation (SD) were calculated using the data from the first 10 patients of each gender, and the software estimated that at least 20 patients were needed as a sample size for the purposes of this study.
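The test-selection logic described above can be summarized schematically as follows. The sketch uses SciPy rather than the Statistica package actually employed in the study, so it approximates the workflow rather than reproduces it; the function names and the handling of the significance threshold are illustrative.

from scipy import stats

ALPHA = 0.05

def compare_independent(group_a, group_b):
    # Women vs. men: independent Student's t-test if both samples pass the
    # Shapiro-Wilk normality check, otherwise the Mann-Whitney U test.
    normal = (stats.shapiro(group_a).pvalue > ALPHA
              and stats.shapiro(group_b).pvalue > ALPHA)
    if normal:
        return "Student t-test", stats.ttest_ind(group_a, group_b).pvalue
    return "Mann-Whitney U", stats.mannwhitneyu(group_a, group_b).pvalue

def compare_paired(condition_a, condition_b):
    # Electrical vs. magnetic stimulation in the same subjects: paired t-test
    # if the paired differences look normal, otherwise the Wilcoxon test.
    diffs = [a - b for a, b in zip(condition_a, condition_b)]
    if stats.shapiro(diffs).pvalue > ALPHA:
        return "paired t-test", stats.ttest_rel(condition_a, condition_b).pvalue
    return "Wilcoxon", stats.wilcoxon(condition_a, condition_b).pvalue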
The significantly prolonged latency of evoked potential in the men compared to the women is related to the greater distance between the stimulation point and the recording level, due to anthropometric features such as the length of the extremities, which are longer in men. However, this does not determine the value of standardized latency reflecting conduction in a particular segment. These values are comparable in the two groups for both types of stimulation (electrical and magnetic) and levels of stimulation (Erb’s point and cervical root) with generally no statistical differences. The exception is the C5 spinal root and Erb’s point stimulation (both electrical and magnetic) for the radial nerve. In the cases above, the standardized latency was significantly longer in the group of men. However, the percentage difference is only 8–11% and the numerical difference is only about 0.02 ms/cm, and these differences are not clinically significant. Similarly, there were significant differences in the amplitude of evoked potentials between women and men. In the assessment of the musculocutaneous nerve, CMAP and MEP generated from Erb’s point showed higher values in the men, while those generated from the ulnar nerve had higher values in the women. The difference is also between 10 and 16%, without clinical significance, and may have resulted from a measurement error, such as the cursor setting during the analysis of potentials. Because the conduction parameters in the groups of women and men were comparable, further statistical analysis was conducted on 84 tests (both groups were combined). The parameters of potentials generated by electrical stimulus (CMAP) were compared with those of potentials generated by magnetic impulse (MEP). Stimulation in both cases was applied at Erb’s point. The data are presented in and . The amplitude of CMAP was significantly higher after electrical stimulation than MEP after magnetic stimulation for all the examined nerves, in the range of 3–7%. This may have been due to the wider dispersion of electrical stimulation according to the rule of electrical field spread. The latency of the evoked potentials was significantly shorter after magnetic stimulation, which is related to the shorter standardized latency. Note that the difference in potential latency values using the two types of stimulation did not exceed 5%. This may be a result of the deeper and more selective penetration of magnetic impulses into tissues (based on the rule of magnetic field spread) and through the bone structures, and, thus, faster depolarization of the brachial plexus fibers. presents examples of CMAP and MEP recordings following electrical and magnetic stimulation at Erb’s point. The repeatability of the morphology of potentials with the use of both types of excitation is noteworthy. The brachial plexus trunks are stimulated at Erb’s point in the supraclavicular area. In the area over the vertebrae, the spinous processes of the vertebrae are points of reference for the corresponding spinal root locations. In the cervical spine, according to the anatomical structure, the spinal roots emerge from the spinal cord above the corresponding numbered vertebrae. A,B presents magnetic coil placements during the MEP study, while gives data results. The results show significantly higher amplitudes of the potentials after stimulation of the cervical roots compared to the potentials evoked at Erb’s point for C5 and C6. In the case of C8, the amplitude was lower than the potentials evoked at Erb’s point. 
It should be noted, however, that these values varied in the range of 9–16%, which, as explained above, is not clinically relevant. We also note the comparable values of proximal standardized latency (PSL) in the cervical root–Erb's point segment for all the stimulated nerves. presents the MEP recordings after magnetic stimulation of the C5 to C8 cervical spinal roots. The MEPs recorded from the cervical roots have a repetitive and symmetrical morphology. The MEPs have a lower amplitude at the C8 level than in the other studied segments (see and ). After undergoing the neurophysiological tests, the subjects indicated the degree of pain sensation during stimulation according to a 10-point visual analogue scale (VAS) (see ). The results indicate that they felt more pain or discomfort during electrical stimulation. The subjects described it as a burning sensation. They also indicated that magnetic stimulation was perceptible as the feeling of being hit, causing a more highly expressed motor action (contraction of the muscle as the effector of the stimulated nerve). Neuroimaging and basic clinical examinations of sensory perception and muscle strength are still the primary approaches for evaluating brachial plexus injury symptoms . Neurophysiological diagnostics is considered supplementary, with the aim of confirming the results of the clinical evaluation. The main novelty of the present study is that it demonstrates the similar value of magnetic stimulation, applied over the vertebrae or at Erb's point, and peripheral electrical stimulation in evaluating the functional status of brachial plexus motor fiber transmission. A strength of our research is the neurophysiological assessment of the function of the brachial plexus short branches, which arise from its trunks. Our results demonstrate the similarity of the outcomes obtained with the two methods following excitation of the nerve structures at Erb's point. The latency and amplitude values of the potentials (CMAP, MEP) evoked at this level by the two types of stimuli differed in the range of 2–7%. In routine diagnostic tests, this range of difference would not significantly affect the interpretation of the results of neurophysiological tests. Hence, we conclude that magnetic and electrical stimuli could be used interchangeably during an examination. We also showed that the excitation of motor fibers by a magnetic impulse can be supramaximal, as indicated by the stable and comparable MEP and CMAP amplitudes. The properties of a supramaximal motor potential with the shortest latency were, in previous studies, attributed to the effects of electrical stimulation, which is commonly used in neurophysiological research. Many authors have pointed to the limited diagnostic possibilities of the magnetic stimulus , the advantages of which were examined in detail in this paper. This is crucial because of the different anthropometric features of patients and the possible extent of damage to the structures surrounding the brachial plexus. Past fractures, swelling, or post-surgical conditions at this level may limit the excitation of axons by an electrical stimulus. The benefit of magnetically induced MEP is that it is less invasive than electrical stimulation, as concluded from the VAS pain scores (see ). The movement artifact associated with magnetic stimulation may influence the quality of the MEP recordings, which should be considered during the interpretation of the diagnostic test results .
MEP studies allow evaluation of the proximal part of the peripheral motor pathway, between the cervical roots and Erb's point, in contrast to low-voltage electrical stimulation. The comparison of the amplitudes of MEPs induced by a magnetic stimulus over the vertebrae with those recorded at Erb's point, as shown in our study, could be the basis for the diagnosis of a conduction block in the area between the spinal root and Erb's point. By definition, in a neurophysiological examination, a conduction block is considered to have occurred when the amplitude of the proximal potential is reduced by 50% relative to the distal potential. In the opinion of Öge et al. , the amplitude of evoked potentials induced by stimulation of the cervical roots compared with potentials recorded distally using electrical stimulation may help to reveal a possible conduction block at this level. According to Matsumoto et al. , the constant latency of MEP induced by magnetic stimulation of the cervical roots was comparable with potentials induced by high-voltage electrical stimulation. In our opinion, similar to the method mentioned above, combining two research techniques, using magnetic stimulation of the cervical roots or Erb's point and conventional peripheral electrical stimulation, is valid for neurophysiological assessment of the brachial plexus. Previous studies on a similar topic by Cros et al. involving healthy subjects revealed parameters of MEPs recorded from proximal and distal muscles of the upper extremities, with the best "hot spots" from C4–C6 during stimulation over the vertebrae. They found that the root potentials were characterized by similar latencies, while the amplitudes recorded from the abductor digiti minimi muscle were the lowest following excitation at the C6 neuromere, contrary to our study, in which they were evoked the most effectively but with the smallest amplitudes following stimulation at C8 (see ). We similarly recorded the largest amplitudes for MEPs evoked from the proximal muscles of the upper extremity. However, our study only involved magnetic stimulation over the vertebrae and not electrical stimulation, which was considered painful. In another study by Schmid et al. , magnetic excitation over the vertebrae at C7-T1 evoked MEPs with smaller amplitudes from distal muscles than proximal muscles compared to high-voltage electrical stimulation applied to the same area. Similar to our study, for MEPs following magnetic versus low-voltage electrical stimulation at Erb's point, latencies were shorter and amplitudes were smaller, and the morphology was the same (see and ). The standardized latencies were comparable for both types of stimulation, which was not reported by Schmid et al. . In our opinion, when interpreting the results of neurophysiological tests of the brachial plexus, reference values indicate only a trend as to whether the parameters of the recorded potentials are within the normal range or indicate pathology . When interpreting the results, special consideration should be given to comparing them with the asymptomatic side, which is the reference for the recorded outcome on the damaged side . The results of the present study can be directly transferred to clinical neurophysiology practice, owing to the possibility of using two different stimuli in diagnostics to evoke potentials with the same parameters that are recorded by non-invasive surface sensors.
Magnetic stimulation appears to be less painful due to the non-excitation of the afferent component, contrary to electrical stimulation, where antidromically excited nociceptive fibers may be involved . One of the study limitations that may have influenced the results, especially the parameters of latencies of potentials, was the anthropometric differences between women and men included in the study group. However, the gender proportions were equal, making the whole population of participants typical for European countries. Considering the number of participants examined in this study, it should be mentioned that due to comparable conduction parameters in the groups of women and men, the final statistical analysis covered 84 tests to compare the parameters of potentials evoked with electrical or magnetic impulses. Moreover, as mentioned in , at the beginning of the pilot study, statistical software was used to determine the required sample size, and it was estimated that at least 20 patients were needed for the purposes of this study. This study reveals that the parameters of evoked potentials in CMAP and MEP recordings from the same muscles after the application of magnetic and electrical stimuli applied to the nerves of the brachial plexus are comparable. Magnetic field stimulation is an adequate technique that enables the recording of supramaximal potential (instead of the submaximal, which was reported in other studies ), which is the result of stimulation of the entire axonal pool of the tested motor path, similar to testing with an electric stimulus. We found that the two types of stimulation can be used interchangeably during an examination, depending on the diagnostic protocol for the individual patient, and the parameters of evoked potentials can be compared. Moreover, in the case of patients sensitive to stimulation with an electric field, which is considered to cause pain in neurophysiological diagnostics, it is crucial to have the possibility of changing the type of stimulus. Magnetic stimulus is painless in comparison with electrical stimulus. We can conclude that the use of magnetic stimulation makes it possible to eliminate diagnostic limitations resulting from individual anatomical conditions or anthropometric features (such as large muscle mass or obesity). MEP studies allow us to evaluate the proximal part of the peripheral motor pathway (between the cervical root level and Erb’s point, and via trunks of the brachial plexus to the target muscles) following the application of stimulus over the vertebrae, which is the main clinical advantage of this study. It may be of particular importance in the case of damage to the proximal part of the brachial plexus. As a study of brachial plexus function, MEP should be compared to imaging studies in order to obtain full data on the patient’s functional and structural status. |
A 53-year-old Man with Idiopathic Bilateral Chylothorax Refractory to Lymphaticovenular Anastomosis | a9d25559-1b4f-4461-9fab-9f7101f4a007 | 11867756 | Surgical Procedures, Operative[mh] | Chylothorax, characterized by milky pleural effusion and a high triglyceride concentration, is an uncommon condition caused by lymphatic leakage in the thoracic cavity . This condition is typically caused by lymphatic flow disruption or obstruction due to infections (e.g., tuberculosis and bacterial pneumonia), tumors, mechanical irritation from coughing and vomiting, surgery, chest trauma, sarcoidosis, amyloidosis, and lymphangioleiomyomatosis. Among these causes, idiopathic chylothorax accounts for 6.4% of all cases , 63% of which affect children below 16 years old, especially newborns and infants in the Japanese literature. Thus, compared to the pediatric population, idiopathic chylothorax is rare in adults. Despite documented successes in conservative and surgical treatment of idiopathic chylothorax, the rarity of this condition hinders the establishment of standardized treatment protocols. We herein report a 53-year-old man with idiopathic bilateral chylothorax. In this case, lymphatic scintigraphy located the leakage point, and appropriate interventions were performed, including lymphaticovenular anastomosis (LVA), a fat-restricted diet, octreotide administration, and physiotherapy. However, the chyle leak remained refractory, thus resulting in a poor patient outcome. In addition, we reviewed previous case reports and discussed possible causes in order to further understand the course of the present patient.
A 53-year-old man presented to our hospital in March 2019 with a 2-month history of chronic dyspnea and a persistent cough that had followed a common cold prior to symptom onset. His medical history was positive for diabetes mellitus and hypertension diagnosed at 50 years of age, but he had no history of smoking or allergies. His family history was significant for the diagnosis of Sturge-Weber syndrome in his son. He had no symptoms or physical findings suggestive of cardiovascular or collagen diseases. On admission, chest radiography and computed tomography (CT) revealed bilateral pleural effusion . A laboratory examination showed elevated triglycerides (390 mg/dL) and a high antinuclear antibody titer (1:640, homogeneous and speckled pattern), whereas cardiovascular and collagen diseases were excluded . Subsequent thoracentesis confirmed milky pleural effusion, with elevated triglyceride levels of 448 mg/dL (right side) and 519 mg/dL (left side). After excluding all possible causes of chylothorax, the patient was diagnosed with idiopathic chylothorax. Initial treatment with diuretics and a fat-restricted diet yielded no significant improvement in the bilateral pleural effusion, necessitating frequent pleural drainage. Approximately five months after the initial treatment, the patient underwent lymphatic scintigraphy, confirming the accumulation of 99m Tc-labeled human serum albumin ( 99m Tc-HSA) in the left venous angle . This indicated potential obstruction or leakage of the lymphatic pathway into the left subclavian vein. Accordingly, LVA at the level of the left supraclavicular fossa was performed in October 2019 under microscopic guidance. The operation was performed by end-to-end anastomosis of the thoracic duct and left external jugular vein with a 9-0 nylon thread. Although no definitive leakage points were visualized in the surgical field, the left pleural effusion was controlled postoperatively. However, the right pleural effusion remained uncontrolled on chest radiography and CT at eight months postoperatively. Although the patient received monthly octreotide administration, his right pleural effusion continued to increase, leading to chronic dyspnea comparable to that before LVA and frequent right-sided pleural drainage. Furthermore, gradual edema progressed from the abdomen to the bilateral lower legs, which was refractory to physiotherapy measures (e.g., elastic stockings, rehabilitation, and lymphatic massage). Lymphatic scintigraphy performed one year postoperatively showed the resolution of the 99m Tc-HSA accumulation in the left venous angle but demonstrated additional accumulation extending from the abdomen to both lower legs . LVA was performed a second time at four sites on the bilateral inner thighs and bilateral inner lower limbs, with lymphatic vessels and superficial veins end-to-end anastomosed with a 12-0 nylon thread. The patient continued to receive octreotide and physiotherapy; nonetheless, the edema persisted and the pleural effusion did not improve. The patient's clinical condition steadily declined due to chronic pleural effusion and worsening leg edema, resulting in dyspnea and negatively affecting his quality of life. The development of scrotal edema and urinary retention further complicated the patient's condition. Approximately three years postoperatively, the patient was admitted to our hospital because of CO2 narcosis and acute renal failure. Following discussions, the patient and his family declined active treatment with mechanical ventilation or dialysis.
He continued to receive symptomatic and supportive care before succumbing to complications a day later. An autopsy was not performed at the request of the family.
In this report, we described a case of adult-onset idiopathic chylothorax with no identified causes. Despite two surgical interventions, the patient's pleural effusion did not resolve, and the bilateral lower extremity edema worsened. In particular, the fact that chyle leak persisted even after the identification of the leakage site by lymphatic scintigraphy and subsequent comprehensive treatments is our central inquiry. Elucidation of the causative factors of non-resolving chylothorax is warranted. This investigation should also discuss the potential presence of inherent abnormalities within the thoracic lymphatic system, the effectiveness of different imaging techniques in maximizing leakage point visibility, and the determination of LVA as the optimal treatment for this specific patient. Chylothorax is characterized by the accumulation of chyle effusion in the pleural cavity due to disruption or obstruction of the thoracic lymphatic system, primarily in the thoracic duct. Generally, the thoracic duct originates at the abdominal chyle cistern, traverses the thoracic cavity through the aortic hiatus, and runs along the right side of the descending aorta, before entering the left subclavian vein near the venous angle. Given its course, chyle leakage can stem from several factors, including thoracic duct failure, pleural lymphatic effusion, peritoneal fluid migration from ascites , and genetic predispositions, such as dysplasia. To address these inquiries, we reviewed previous case reports and studies. Our review identified 20 cases of adult-onset idiopathic chylothorax in both the Japanese and English literature . These cases had a wide age range and a predominance of women, with approximately 25% exhibiting bilateral pleural effusions. Notably, all patients achieved resolution or reduction of their effusions, with no reported mortalities during the study period. Among them, 15 (75%) required open or thoracoscopic surgery, mainly thoracic duct ligation or embolization. Conversely, conservative medical treatment was successful in only 5 (25%) cases, while spontaneous resolution was observed in only 2 (10%) cases. Among the surgical cases, pleural effusions re-accumulated within one month postoperatively in only two cases, while the remaining cases showed no recurrence for an extended period. Preoperative lymphatic scintigraphy and lymphangiography were performed for 10 patients. Leakage localization was achieved via lymphatic scintigraphy in three cases and lymphangiography in two cases. Furthermore, all cases that underwent preoperative lymphatic scintigraphy had successful surgical confirmation of the actual leakage site. In our case, despite the use of preoperative lymphatic scintigraphy, the recurrence of postoperative pleural effusion may suggest underlying abnormalities in the thoracic duct anatomy. Multiple variations in thoracic duct anatomy were reported in a previous study . Magnetic resonance thoracic ductography (MRTD) showed that while the right thoracic duct typically drains into the left venous angle, configuration variations occur in up to 14% of cases . Compared to lymphangiography and MRTD, lymphatic scintigraphy offers lower resolution, making it more difficult to diagnose anatomic subtypes. Another possible consideration is congenital dysplasia or fragility of venous and lymphatic systems. This was considered based on the diagnosis of Sturge-Weber syndrome in the patient's son, which may have been caused by congenital dysplasia of the lymphatic and blood vessels. 
However, we did not investigate this association in this study. Recent research has implicated somatic mosaic mutations as the causative factor, while germline mutations are rare . This finding is consistent with the observation that idiopathic chylothorax occurs more frequently in children than in adults. Given the family history of vascular abnormalities in the patient, we may consider the involvement of congenital factors. Despite the inferences made, it was difficult to gather detailed information in this case. Further case and genetic studies are necessary to resolve this issue. In addition to congenital dysplasia of lymphatic and blood vessels, another abnormal pathology may have been present, such as lymphatic thrombus, which is reported to exacerbate lymphedema , although examinations proved no such condition in this case. We must consider the appropriateness of the implementation of LVA in the present case. Despite pharmacological (octreotide) and non-pharmacological (physiotherapy and diet) interventions, the patient's symptoms did not improve. Considering the increasing frequency of repeated thoracentesis, lack of effective treatment options, and desire to return to work, we decided to perform LVA. However, as summarized in , none of the previous patients underwent LVA. Similarly, no cases of LVA secondary to other causes of chylothorax were observed. The paucity of documented LVA outcomes in chylothorax may indicate its limited success rate. LVA creates a peripheral shunt between the lymphatic and venous systems and is usually performed to prevent recurrent cellulitis associated with lymphedema of the legs and arms . In a retrospective analysis of 95 patients with primary and secondary lymphedema , this procedure successfully reduced episodes of cellulitis by improving lymphatic flow; it was more successful in women than in men, in the leg than in the arm, and in patients with secondary lymphedema than in others. LVA may have failed to resolve the chyle leak in our case because the patient was a man, the cause of the leakage was unknown, and most concerning, LVA was created close to the venous angle where the large lymphatic and venous vessels merged. Lymphatic scintigraphy after LVA may have implied that the LVA unexpectedly hampered venous return from the lower extremities. Theoretically, LVA works to reduce intralymphatic pressure driving lymphatic leak but was obviously not effective in completely controlling chyle leak in our case. The cause of death in this patient must also be considered. Although LVA is less invasive than other treatments and has the potential for lymphatic recanalization, unforeseen adverse effects may occur postoperatively. For instance, persistent inflammation or pressure imbalances between the lymphatic and venous vessels after LVA could have caused adverse pathologies, such as increased venous pressure and hampered venous return. In retrospect, we should have considered open surgical or thoracoscopic approaches, which are more invasive approaches than LVA, especially since this procedure has irreversible effects. In conclusion, although most cases of idiopathic chylothorax respond favorably to conservative treatment, intractable cases refractory to conservative treatment are possible. Given the limitations of lymphatic scintigraphy, comprehensive investigations using magnetic resonance imaging, thoracoscopy, and lymphangiography are crucial for accurately localizing lymphatic abnormalities and leak points. 
Furthermore, evaluating more cases is essential for identifying new etiologies and developing treatments based on pathology.
The Combination of Solid-State Chemistry and Medicinal Chemistry as the Basis for the Synthesis of Theranostics Platforms

One of the most relevant trends in the development of modern medical chemistry is theranostics: an approach to drug development in which the created compositions have the potential to provide a joint solution to the problems of early diagnosis and targeted therapy for certain diseases. The concept of theranostics has paved the way for the development of multifunctional modified nanoparticles that combine diagnostic and therapeutic properties within a single material with a predefined set of properties. These serve as a basis for designing pharmacological compounds with predetermined effects on the organism. Modern medical practice includes the successful use of nanoparticles of various natures as carriers of drugs and fluorescent dyes. The contemporary level of solid-state chemistry makes it possible to design functional medical nanosystems; however, the search for best practices in synthesizing nanoparticles with the aim of having tailored properties, in accordance with present-day tasks, continues. In particular, these efforts are focused to a large extent on the possibility of using nanomaterials for specific drug delivery directly to the damaged area that requires pharmacological intervention.
The scientific community has identified theranostics as a part of personalized medicine; theranostics refers to precision medicine in which drugs are selected individually for each patient, according to the predicted response thereof or to the risk of a disease . The essence of the phenomenon does not really change, and theranostics, as an original trend, is a combination of early diagnosis and therapy. One example of such an interaction of areas in cardiology is a reduction of the infarction zone and its visualization (see ) . Some modern publications have discussed ways to introduce the special term “theranostics”. In fact, the issue is rather of epistemic nature and regards the interface of chemistry and medicine , as the authors often use the term “paradigm”. This article presents a detailed analysis of publications in the field of theranostics, with the premise that theranostic interventions will allow for simultaneous diagnosis and treatment of diseases. It has been noted that, at present, many treatment methods are essentially theranostics, in the sense that the treatment process proceeds under control, and the effectiveness of treatment should lead to a deeper understanding of the patient’s condition. It can be concluded that such methods have long been known and are currently available in practice, as, in order to treat a disease, one must first diagnose it; therefore, the term “theranostics” is, in fact, unnecessary. Most studies that have focused on the effectiveness of targeted drug delivery based on nanoparticles are concentrated in the field of oncology. The problem of efficient targeted delivery of cardioprotectors to the myocardial ischemia–reperfusion area remains open. The PubMed database presented 6019 publications on theranostics, of which 2689 refer to the treatment for cancer, 99 are in the field of cardiology, and only six articles are related to cardioprotection . In the general case, the development of modern platforms for theranostics is, of course, conducted in terms of chemistry. At least two of its branches are involved: The first branch is solid-state chemistry, as the issues in question are usually nanoparticles and solid-phase synthesis. The second branch of chemistry used for theranostics is medical or medicinal chemistry, as we are actually developing new medicines by using an already-known molecule or a pharmacophore fragment. A general theranostic platform synthesis scheme is illustrated in . First, a nanotransporter is selected, and various nanomaterials have been used as the basis for theranostics systems. The variety of such materials is illustrated in . First, the biodistribution of the base in a living organism and its biodegradation or bio-elimination ability are studied. Then, chemical and physical properties of the nanomaterial surface are studied, and surface chemistry methods are selected for the immobilization of active substances and fluorophores. The latter are fixed on the surface of nanoparticles, using a spacer with a terminal functional group. This procedure is preceded by the screening and selection of drugs and fluorophores. The amount of immobilized substances determines the surface capacity and is characterized by such properties as the grafting density. In general terms, the roles of spacer and immobilization methods are explained in . Spacers with terminal functional groups are designed to immobilize active substances and various contrast agents (e.g., fluorescent, magnetic, X-ray, or ultrasound) are used. 
A separate important task is to extend the spacer, thus ensuring that distance is maintained between the carrier particle and the contrast agent, in order to avoid contrast damping. The immobilization of active substances and contrasts on the surface of nanocarriers can be achieved by using the functional group or by other methods. The most frequently used functional groups are amino, carboxyl, and (less commonly) glycidine groups . Albumin is sometimes used as a spacer, acting as a transport protein with many functional groups . Frequently used coatings include hydrothermal coatings and coatings from dissolved polymer shells, such as polylactic acid .
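The notion of surface capacity mentioned above can be made concrete by converting a loading (mmol of immobilized substance per gram of carrier) into a weight fraction of payload in the final conjugate. The short Python sketch below illustrates the arithmetic only; the loading value and the choice of adenosine as the payload are assumptions for this example and are not taken from any of the cited works.

```python
# Convert a surface loading (mmol of payload per g of carrier) into a weight
# fraction of drug in the resulting conjugate. Numbers are illustrative.

def payload_weight_percent(loading_mmol_per_g: float, molar_mass_g_per_mol: float) -> float:
    drug_g_per_g_carrier = loading_mmol_per_g * 1e-3 * molar_mass_g_per_mol
    return 100.0 * drug_g_per_g_carrier / (1.0 + drug_g_per_g_carrier)

# Hypothetical example: 0.05 mmol/g of adenosine (M ~ 267.2 g/mol) on a silica carrier.
print(f"{payload_weight_percent(0.05, 267.2):.2f} wt% drug")   # roughly 1.3 wt%
```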
At present, highly dispersed silica (HDS) Aerosil is widely used in biomedicine as a platform for theranostics and targeted drug delivery. It possesses a number of valuable properties, including biodegradability, a high specific surface (which determines a high capacity for a drug), and surface chemistry that allows for modification with various targeting ligands and fluorophores. The undoubted advantages of silica are its availability, low cost, and many years of application in medicine. We provide a detailed description of the development of systems for theranostics, using the example of nanosized silica. Aerosil 300 (Polysorb) has received widespread attention. This silica has a specific surface area of about 300 m 2 /g and consists of globular nanoparticles with a size of 8–10 nm that can form larger aggregates . The surface of Aerosil contains Si–OH silanol groups, the concentration of which increases when in contact with water. These groups ensure the variety of interactions between finely divided silica (HDS) and biological objects, as well as its use in medicine. Research has shown that HDS is not genotoxic, and is practically non-toxic to the oral cavity, skin, and eyes . Neither inhalation nor oral administration cause neoplasms (tumors). HDS is not mutagenic, and has shown promise as a basis for theranostic drug development. Moreover, also worthy of note are the properties that allow for the use of HDS in this area, including its biological compatibility, biodegradability in living organisms, option for varying the particle size, and large specific surface area (making it possible to obtain preparations with high content of biologically active substances), as well as the presence of functional groups necessary to create active centers designed for the immobilization of drugs and marker compounds. To assess the prospects for using HDS in theranostics, we identify medical practice areas where it has already found its application below. HDS has been most widely used as an enterosorbent, and it actively interacts with enteropathogenic organisms, such as Escherichia coli , Staphylococcus aureus , Vulgar protea , and Pseudomonas aeruginosa , forming aggregates of sorbent particles containing microbial cells . The formation of agglutinates of microbial cells and SiO 2 particles is accompanied by a sufficiently strong binding of micro-organisms. A study of the effect produced by finely divided silica on the wound microflora has shown that it equally intensively binds both Gram-positive cocci (Staphylococci) and Gram-negative bacilli ( Pseudomonas aeruginosa ). The drug not only binds microbes, but also adsorbs growth and reproduction factors thereof, as well as microbial exotoxins. Thus, highly dispersed silica, while not showing a direct antibacterial effect, significantly limits the manifestation of pathogenic properties of micro-organisms; this effect underlies the therapeutic effect of the drug when treating suppurative wounds and acute intestinal infections. It is well-known that many enterosorbents significantly alter the state of the digestive tract. For example, bran, pectin, guar gum chime, and other substances can change the state of the chyme, the speed of its passage, and (in some cases) the pH of the intestinal lumen . Sorbents can also affect the state of the intestinal wall. With the prolonged intake of cellulose, the length of the small intestine decreases . 
It has been found that dietary fiber can stimulate the growth of intestinal epithelial cells, thus increasing the mucosal surface area, or exert an “abrasive” effect . Most enterosorbents stimulate intestinal motility, but some of them slow down the food transit time, causing stool delay (polyphepan) or constipation (coal sorbents) in patients . Guar fibers in a therapeutic dose inhibit the secretion of pancreatic glucagon and increase calcium absorption . Cellulose increases the activity of disaccharidases, whereas saponins inhibit lactase activity but increase alpha-amylase activity . Chitin sorbents can modulate the activity of a wide range of enzymes—lipase, amylase, glucokinase, and prostaglandin synthetase . Enterosorbents also affect the intestinal microflora by regulating its growth, both due to the sorption of micro-organisms and by changing their environment. Enterosorption may impair the absorption of trace elements and vitamins: the impact of this effect is determined by the type of sorbent, intake duration, and dose . Thus, enterosorbents can have both positive and negative effects on an organism. Following the requirements of newly developed drugs, the toxicity of HDS has been studied in diverse species of animals (e.g., rats, pigs, and rabbits) in acute and chronic experiments. The results of a comprehensive study have shown that the therapeutic dose of this drug, equal to 100 mg/kg (and even exceeding that by 3–10 times), does not exert a toxic effect . High sorption activity determines the possibility of using HDS in surgical practices. HDS significantly outperforms other sorbents in its ability to sorb protein; it exceeds debrisan by 4.5–5 times, and gelivin and celesorb by more than 10 times. Various blood serum proteins (e.g., albumin and globulins)—and, consequently, wound exudate proteins—are absorbed by HDS equally actively, which means that this substance is not selective by nature . In clinical practice, HDS is used in combination with a gauze dressing for treating purulent wounds. A particularly important element of wound preservation is a multifunctional dressing. Thus, HDS most fully meets the requirements of wound preservation. HDS used as a primary dressing component binds a significant number of micro-organisms and prevents the invasion thereof into deeper tissue. It should be noted that the sorbent does not exhibit selective action against aerobic and anaerobic microflora. High and rapid water absorption by the sorbent promotes dehydration and mummification of non-viable tissues, conversion of wet necrosis into dry necrosis, dehydration of the edematous tissue, and relief of edema. Based on this, a dressing has been developed containing HDS mechanically connected to the gauze base . It is applied over the wound surface and fixed with a bandage. This dressing can be applied at both pre-medical and first-medical aid stages. The sorption properties of HDS and its ability to form a gel upon contact with water make its use in dentistry possible. HDS, in combination with a solution of chlorhexidine and a solution of decamethoxin, turned out to be a highly effective means for the treatment of herpetic stomatitis in children, erythema exsudativum multiforme, and an erosive-ulcerative form of lichen planus . As is well-known, dental caries and their complications are currently treated with antibiotics, sulfonamides, enzymes, harmonious preparations, antiseptics, and herbal remedies. 
Different dosage forms, including solutions, pastes, emulsions, and ointments, administered by methods of surface application, irrigation, and inhalations for the treatment of dental diseases, and acute and chronic inflammatory processes of periodontal tissues and oral mucosa, do not allow for the fully realized pharmacological potential of these substances. This results from the fact that saliva constantly reduces the concentration of the substances, thus diminishing their effectiveness . Therefore, there is a need to use drugs with pronounced detoxification properties and prolonged duration of action—these are exactly the properties of drugs immobilized on HDS. The use of HDS for drug immobilization provides a solution to the problem of uniform distribution of small amounts of biologically active substances in the system, and allows for extension of their duration of action . Several biologically active drugs immobilized on HDS and intended for the treatment of acute deep caries and periodontal tissue diseases have been clinically tested. An efficient paste was developed that consists of a sorbent that carries biologically active trace elements (e.g., fluorine, copper, zinc, and manganese), antibiotics (a mixture of penicillin, streptomycin, and laevomycetin), and distilled water. This paste possesses high antimicrobial activity against dentin microflora in caries cavities, and the components display synergistic action. The clinical effectiveness of deep-tooth-decay treatment with the developed paste was evaluated after 24 months in 325 patients, where positive results were achieved in 89% of patients. A large proportion of periodontal diseases involve periodontitis or inflammation of the periodontal tissues. HDS-immobilized preparations of plant and synthetic origin have been used for the treatment of periodontitis: eucalyptus, furazolidone, rivanol, lincomycin, etonium with urea, tincture of calendula and calamus, and salvin. The listed drugs serve as the constituents of suspensions and pastes for application to the gum and insertion into the gingival pockets. A total of 1085 patients with mild and moderate periodontitis aged 18–65 years were considered in the study. These patients were treated with both non-immobilized (first group, 285 people) and immobilized (second group, 800 people) drugs. The most pronounced and persistent therapeutic effect was obtained in patients who were treated with HDS-immobilized drugs. This was apparently due to prolongation of the main therapeutic properties of the drugs and the sorption of toxic products on HDS. The results showed that the treatment reduced inflammation in the gums and, in all cases, the effect of the immobilized drug exceeded that of the parent drug. The increase in therapeutic activity of the studied drugs, when immobilized on a sorbent, was probably due to not only by prolongation of their pharmacological action, but also by the HDS sorption properties. These properties allow for the removal of toxins of micro-organisms and tissue-decay products from the abnormal focus. Thus, it was found that the tested immobilized drugs of synthetic and herbal origin exert therapeutic effects for the treatment of teeth diseases, periodontal tissues, and oral mucosa that made it possible to recommend them for a wider dental practice . The abovementioned HDS medical-application areas give us reason to believe that this nanomaterial can also be used as a carrier for targeted delivery of drugs used in theranostics. 
Obviously, in this case, it is necessary to modify the silica surface with functional groups capable of serving as immobilization centers of biologically active and marker compounds. Let us consider several studies in this area.
A substance that can be used as an initial matrix in theranostics platforms is Aerosil A-380. Preliminary experiments have shown that Aerosil of this brand ensures a higher concentration of engrafted groups and immobilized drugs, as compared with Aerosils A-300 and A-175. The silica modification technique, based on a chemical assembly method, consists of several stages: chemisorption of (3-Aminopropyl)triethoxysilane, hydrolysis of unreacted alkoxy groups, addition of 3-(Boc-amino)octanoic acid as a spacer, deprotection, and deprotonation. Chemosorption of (3-Aminopropyl)triethoxysilane is carried out in a flow-type reactor with a stationary carrier layer in dry nitrogen, at a temperature of 220 °C . Fluorophores indocyanin green and fluorescein, as well as the anticancer drug Zn-protoporphyrin, can be immobilized on the surface of aminated Aerosil . Spacers are prepared by using the methods of solid-phase peptide synthesis on silica matrices. Peptide bond formation is carried out by the symmetric anhydride method. After immobilization of 3-(Boc-amino)octanoic acid, the content of functional groups, determined by the analysis with the use of acid dye bright orange G, was equal to 0.053 mmol/g. Taking into account the content of amino groups after chemisorption of (3-Aminopropyl)triethoxysilane equal to 0.055 mmol/g, it can be concluded that the greater part of grafted amino groups interacted with 3-(Boc-amino)octanoic acid, and the spacers anchored on the Aerosil surface are mainly those shown in . Then, the obtained nanodispersed silica with grafted spacers is used to immobilize the anticancer drug Zn-protoporphyrin and a fluorophore (fluorescein; ). These preparations are conjugated by using the carbodiimide method. The sustained-release carriers are prepared by using matrices with various functional groups that ensure the covalent, ionic, and adsorption binding of biologically active substances . The centers for immobilization of drugs that ensure covalent binding are formed by glutaraldehyde conjugated to the amino group of Aerosil. To ensure immobilization based on ionic bond formation, the aminated Aerosil is treated with succinic anhydride, and carboxyl groups are obtained on the carrier surface. Sorption immobilization is carried out on the surface of Aerosil A-380 containing silanol groups. Adenosine, which is widely used as a cardioprotector, was taken as a model drug . The drug release kinetics was studied by desorption of immobilized adenosine in Krebs–Henseleit buffer, with salt content close to that of the blood composition . The most rapid release of the drug has been observed under adsorption immobilization, while the slowest was observed under ion binding. This fact can be explained by the instability of azomethine groups formed under adenosine binding by glutaraldehyde. The immobilization of bradykinin as a cardioprotector (a peptide hormone) on aminated aerosil has been carried out by the glutaraldehyde method. The obtained samples with immobilized biologically active substances (bradykinin, adenosine, fluorescein, and cardiogrin) were tested for toxicity, biocompatibility, and biodegradability of silica nanoparticles. It was shown that intravenous administration of nanodispersed particles to rats does not cause a significant change in hemodynamic parameters, such as blood pressure and heart rate, which indirectly indicates good tolerability of these drugs. 
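The functional-group contents reported above (0.055 mmol/g of grafted amino groups and 0.053 mmol/g after coupling of the Boc-protected spacer) can be translated into an approximate grafting density per unit surface area. The Python sketch below shows the conversion; the specific surface area of 380 m²/g is the nominal value for Aerosil A-380 and is an assumption of this example rather than a figure quoted in the cited work.

```python
# Rough estimate of grafting density on Aerosil A-380 from a bulk loading.
# Assumption: nominal BET surface area of 380 m^2/g for Aerosil A-380.

AVOGADRO = 6.022e23  # molecules per mole

def grafting_density(loading_mmol_per_g: float, surface_m2_per_g: float) -> float:
    """Convert a loading in mmol/g into groups per nm^2 of carrier surface."""
    molecules_per_g = loading_mmol_per_g * 1e-3 * AVOGADRO
    nm2_per_g = surface_m2_per_g * 1e18  # 1 m^2 = 1e18 nm^2
    return molecules_per_g / nm2_per_g

for label, loading in [("grafted NH2 groups", 0.055), ("spacer after coupling", 0.053)]:
    sigma = grafting_density(loading, surface_m2_per_g=380.0)
    print(f"{label}: ~{sigma:.3f} groups/nm^2")
# Both loadings correspond to roughly 0.08 groups per nm^2, i.e. a sparse
# functional layer compared with the ~4-5 silanol groups per nm^2 typical
# of fully hydroxylated silica.
```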
Silicon content measurement by atomic absorption spectroscopy showed that, in 30 days, about 90% of the administrated silica was excreted by rats as a result of biodegradation . It was also found that adenosine (ADN) adsorption on silica nanoparticles increases the infarct-limiting effect of the drug ( a). At the same time, a study on the distribution of silica in healthy rats and during myocardial ischemia has shown a significant increase in silica content in the damaged organ, which allows for the targeted delivery of drugs to the ischemic myocardium, based on modified aerosil nanoparticles . It was found that the adsorption of adenosine and bradykinin on silica nanoparticles surface resulted in significant attenuation of the hypotensive effect. The results of the analyses demonstrated the potential of using nanodispersed silica matrices as targeted drug delivery carriers. Considering the high cost of peptide hormones—in particular, that of bradykinin—we investigated the possibility of obtaining silica-based matrices for peptide synthesis, guided by the idea that, in contrast to the known silica matrices, the obtained matrices can ensure separation of the target product under mild conditions, preventing the possible cleavage of peptide bonds and matrix destruction. The developed synthesis technique includes a multistage process of chemical assembly on the initial silica surface , for which the silica gels KSK-2 and silochrome C-120 have been used . The silica surface was extremely hydroxylated to achieve maximum functionalization at a later stage, through chemisorption of (2-Phenylethyl)trichlorosilane ( , scheme 1). The next step was the hydrolysis of chrosilyl groups ( , scheme 2), followed by chloromethylation of the aromatic ring, using chloromethyl methyl ether in the presence of SnCl 4 ( , scheme 3). The last stage of the matrix synthesis was conjugation of p-hydroxybenzyl alcohol ( , scheme 4), which interacts with chloromethyl groups. It was found that about 50% of all chloromethyl groups entered into the reaction. The silica matrices thus formed were tested in the synthesis of the glycylglycine dipeptide. It should be noted that glycine-based spacers have been widely used to immobilize biologically active compounds. The dipeptide was prepared according to a classical solid-phase synthesis method by using Fmoc-glycine pentafluorophenyl ester. The first amino acid was conjugated by using the activated ester method. Then, the acid was released with morpholine-dimethylformamide solution, and the second amino acid was conjugated by using the same method. The dipeptide thus synthesized was separated from the carrier by reaction with a mildly acidic reagent (trifluoroacetic acid). Consequently, using a chemical assembly method, new silica matrices have been synthesized for the preparation, immobilization, and targeted delivery of biologically active substances. These carriers can readily be used as platforms for developing preparations intended for theranostics, as they allow for the immobilization of marker compounds necessary to visualize lesions and to make diagnoses, as well as to achieve covalent and non-covalent immobilization of cardioprotectors and anticancer drugs. 
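The observation above that about 90% of the administered silica is cleared within 30 days can be turned into a rough elimination half-life if one assumes simple first-order kinetics; the cited work does not state the kinetic model, so the sketch below is only an order-of-magnitude illustration.

```python
# Back-of-the-envelope elimination kinetics, assuming first-order clearance.
import math

fraction_remaining = 0.10   # ~90% of the silica excreted
t_days = 30.0

k = -math.log(fraction_remaining) / t_days   # first-order rate constant, 1/day
t_half = math.log(2.0) / k                   # elimination half-life, days

print(f"k  = {k:.3f} per day")    # approximately 0.077 per day
print(f"t1/2 = {t_half:.1f} days")  # approximately 9 days
```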
It is also worth noting that the developed synthetic approaches can be used to bind targeting ligands which ensure active targeted drug delivery; for example, the delivery of an increased concentration of silica nanoparticles containing cardioprotectors in the damaged parts of heart muscle can be achieved by immobilizing antibodies to annexin V, which is a highly specific marker of ischemic damage and is expressed on the ischemic focus surface .
There has recently been a significant upsurge in the design of platforms for theranostics, most of which are used for cancer therapy. Additionally, related studies have been devoted to cardiology, diabetes, treatment of the liver, kidneys, autoimmune, inflammatory, and neurological diseases. At the same time, to date, problems remain at the level of scientific research, due to the complexity of the chemical methods used to synthesize such systems. Here, we provide an overview of the most promising developments. One study has proposed a new theranostic technique which includes the targeted delivery of relaxin (RLX) to the liver . As is well-known, RLX has potent anti-fibrotic properties, but, at the same time, it has a suboptimal pharmacokinetic profile and serious side effects. In the above research, RLX was conjugated to PEGylated superparamagnetic iron oxide nanoparticles (RLX–SPION), and the specific binding/absorption of such nanoparticles by hepatic stellate cells (HSCs) was investigated, and the therapeutic effect of RLX–SPION on human HSCs in vitro and in vivo in a CCl4 model of induced liver cirrhosis in mice was evaluated. In a cell culture, RLX–SPION were bound to the surface of TGFβ-activated HSCs, after which the TGFβ-induced HSCs were internalized and inhibited differentiation, migration, and contraction. In vivo RLX–SPION significantly attenuated cirrhosis and showed increased contrast in MRI. In general, the research presented RLX–SPIONs as a novel theranostic platform which provides new opportunities for the diagnosis and treatment of cirrhosis. An anticancer theranostics nanoplatform based on controlled near-infrared radiation (NIR) has been developed by encapsulating up-conversion nanoparticles (UCNPs) and the luminogen 2-(2,6-bis((E)-4-(phenyl(40-(1,2,2-triphenylvinyl)-[1,10-biphenyl]-4-yl)amino)styryl)-4H-pyran-4-ylidene)malononitrile (TTD) with an amphiphilic polymer having the characteristics of aggregation-induced emission (AIEgen) . The cyclic peptide of arginine–glycine–aspartic acid (cRGD) was conjugated to obtain UCNP@TTD-cRGD nanoparticles. As an outcome of the work, the bioimaging and antitumor ability of UCNP@TTD-cRGD nanoparticles were assessed when illuminated with near-infrared radiation in an in vitro three-dimensional (3D) cancer spheroid mouse tumor model. With a close match between UCNP radiation and AIEgen absorption, the synthesized nanoparticles efficiently generated reactive oxygen species (ROS), even when excited through thick tissues. The developed NIR-regulated UCNP@TTD-cRGD can provide selective visualization of cancer cells and significantly inhibit tumor growth during NIR-regulated phototherapy, as compared with white-light excitation. A theranostics nanosystem developed for highly selective therapy against tumors and for in situ tracking of fluorescence during cancer chemotherapy has been described . The developed theranostic agent (RA-S-S-Cy) includes a disulfide bond as a cleavable linker, a near-infrared (NIR) active fluorophore acting as a fluorescent tracker, and the natural RA-V cyclopeptide acting as an active anticancer agent. Upon reaction with a high level of intracellular glutathione (GSH), a disulfide bridge is cleaved, resulting in a concomitant active release of the RA-V drug and a significant increase in NIR fluorescence. 
To further improve the RA-S-S-Cy tumor targeting and to increase the generation of reactive oxygen species, the RA-S-S-Cy, together with an oxygen-generating agent catalase, were included in the shell of the PLGA lactic acid copolymer targeted by peptide with (RGDfK), in order to obtain RA-S-S-Cy@PLGA nanoparticles. An attempt has been made to combine the advantages of albumin nanoparticles and quantum dots (QD) to improve drug accumulation in tumors and the ability to perform strong fluorescence imaging on a single carrier . Researchers have considered the problem of premature drug release from protein nanoparticles and the high toxicity of QD caused by the leakage of heavy metals. As a result, a cancer theranostics platform has been developed by combining a biocompatible albumin backbone with CdTeQD and mannose fragments, in order to enhance accumulation in tumors and reduce QD toxicity. The chemotherapeutic water-soluble drug pemetrexed (PMT) was conjugated, through a tumor-cleavable bond, to the albumin backbone for specific release into the tumor. In combination with the herbal hydrophobic drug resveratrol (RSV), a phospholipid complex was preliminarily formed, which ensured its physical encapsulation in albumin nanoparticles. The albumin–QD conjugate showed increased cytotoxicity and internalization in breast cancer cells, which can be traced due to their high quantum fluorescence yield and excellent visualization ability. A new theranostic hybrid nanocomposite has been described , which includes an iron oxide core and a mesoporous silica shell (IO@MS) with an average size of 30 nm. The nanocomposite is coated with a layer of human serum albumin (HSA), and can be used for magnetic resonance imaging (MRI) and drug delivery. The porous structure of the IO@MS nanoparticles was loaded with the antitumor drug DOX, with 34 wt.% drug loading. To capture the drug, a dense HSA coating bound by isobutyramide was applied. It has been shown that this protein nanoassembly is destroyed by proteases, thus releasing DOX. The effect has been proven in a three-dimensional cell model, using confocal imaging. Cytotoxicity was observed in studies of spheroid growth inhibition in liver cancer cells. Another study has focused on a metal organic framework (MOF) combined with hollow mesoporous organosilica nanoparticles (HMONs), using an intermediate layer of polydopamine (PDA) to form molecular organic/inorganic hybrid nanocomposites (HMONs–PMOF) . Doxorubicin hydrochloride DOX and indocyanine green (ICG) were separately loaded into the HMON internal cavity and on the MOF outer porous shell. The resulting double drug-loaded nanocomposites (DOX/ICG@HMONs–PMOF) have shown good photothermal properties and pH/NIR-initiated DOX release. In addition, in vitro cell experiments have confirmed that HMONs–PMOFs can efficiently deliver DOX to cancer cells. When released into cancer cells, the photothermal effect of DOX/ICG@HMONs–PMOF can cause lysosomal rupture, thereby facilitating the “exit from lysosomes” process and accelerating DOX diffusion in cytoplasm. To obtain effective DI@HMONs–PMOF accumulation, the tumor location was investigated, in terms of the benefits of using iron ions coordinated on PDA, and ICG enclosed in MOF, magnetic resonance (MR), and photoacoustic (PA) dual-mode imaging. The results also indicated that ICG attached to nanoparticles can improve the capabilities of MR imaging with the prepared nanocomposites. 
Another research has shown that the use of glucocorticoids (GC), as a component of nanoparticles, can improve delivery to inflamed areas, thus increasing their effectiveness and minimizing the required dose, consequently reducing the associated side-effects . The nanoparticles proposed in the research consist of GC betamethasone phosphate (BMP) and fluorescent dye DY-647 (BMP-IOH-NP). These nanoparticles have been recommended for the more effective treatment of inflammation while monitoring the in vivo delivery. The uptake of BMP-IOH-NP by macrophages was analyzed by using fluorescence and electron microscopy. Lipopolysaccharide-stimulated cells were treated for 48 h with BMP-IOH-NP (1 × 10 −5 —1 × 10 −9 M), BMP, or dexamethasone (Dexa). The drug efficiency was evaluated by measuring the level of interleukin 6l. Mice with zymosan-A-induced limb inflammation were injected intraperitoneally with BMP-IOH-NP (10 mg/kg), and mice with ovalbumin-induced allergic airway inflammation (AAI) were treated intranasally with BMP-IOH-NP, BMP, or Dexa (2.5 mg/kg each). Efficacy assessment was performed in vivo by limb volume measurement and ex vivo by measuring the limb mass in mice injected with zymosan-A, or in the AAI model by in vivo evaluation of lung function by radiography and cell count in bronchoalveolar lavage fluid. BMP-IOH-NP delivery to the lungs of AAI mice was monitored by in vivo optical imaging and fluorescence microscopy. It was shown that the synthesized BMP-IOH-NP nanoparticles can be successfully used in anti-inflammatory theranostics. FHMP nanoparticles (FHMP NPs) have been synthesized for sonodynamic therapy (SDT) and PA imaging of tumors by integrating melanin nanoparticles (MNPs, a component for PA imaging) into the shell of hematoporphyrin monomethyl ether (HMME, a component for improving SDT) . Then, the nanoparticles were protected with poly(lactic-co-glycolic acid) (PLGA) and additionally functionalized with folic acid (FA)—a tumor-oriented ligand. The synthesized FHMP NPs with wide optical absorption not only possess high ability to enhance contrast during PA imaging, but also demonstrate significant SDT efficiency. The PLGA-based nanoplatform improved the HMME light stability and sonodynamic characteristics, as well as facilitating MNP delivery to the tumor area. The sonosensitizer, which is assisted by ultrasound irradiation, generates ROS-mediated cytotoxicity against tumor cells. It has been demonstrated that, at the cellular level in in vitro and in vivo tumor xenograft mouse models with tumors, FHMP NPs contributed to the selective ROS killing effect in tumor cells and played an active role in the suppression of tumor growth. Hydrophobic superparamagnetic iron oxide (SPIO) nanoparticles have been obtained by using the thermal decomposition method . They were coated with 1,2-distearoyl-sn-glycero-3-phosphoethanolamine-N-[methoxy(polyethyleneglycol)-2000] (DSPE-PEG 2000) and DOX, using a thin-film hydration technique, followed by ICG loading into phospholipid layers. In vitro biocompatibility and antitumor efficacy were assessed by using MTT analysis. In vivo fluorescence and magnetic resonance imaging (MR) were used to assess penetration across the blood–brain barrier (BBB) and accumulation in brain tumor tissue. The obtained multifunctional nanoparticles had an average diameter of 22.9 nm, zeta potential of –38.19 mV, and were able to provide a sustained release of DOX. 
In vitro experiments showed that SPIO@DSPE-PEG/DOX/ICG nanoparticles effectively increased cellular uptake of DOX, as compared to free DOX. In vivo fluorescence and MRI showed that the nanoparticles not only effectively overcame the BBB, but also selectively accumulated at the tumor location. A multifunctional theranostics nanoplatform has been developed on the basis of gold nanoparticles (NF) stabilized by poly(amidoamine) dendrimer of the fifth branching order (G5), to which ultra-fine iron oxide nanoparticles (USIO) were added . This composition can be used for combined photothermal therapy (PTT) and radiation therapy (RT) under the control of multimodal imaging, namely T1-weighted magnetic resonance (MR)/computed tomography (CT)/PA imaging. Gold nanoparticles stabilized by the G5 dendrimer and citric acid–stabilized USIO were obtained separately. Then, they were mixed at a certain molar ratio (Fe:Au) with the formation of complexes. The complexes were exposed to the solution for the growth of gold nanoparticles. The remaining dendrimer terminal amine groups were then acetylated. The resulting DSNF-stabilized Fe 3 O 4 /Au had an average diameter of 99.8 nm, showed good colloidal stability, cytocompatibility, and near-infrared absorption. The unique structure and composition of DSNF Fe 3 O 4 /Au provided high relaxivity r 1 (3.22 mM −1 s −1 ) and a photothermal conversion efficiency (82.7%), which allows them to be used as a theranostics nanoplatform for multimodal MR/CT/PA imaging, PTT, and radiotherapy (RT) of tumors with improved therapeutic efficiency. Red fluorescence ZnO nanoparticles have been synthesized by using the polyol method in boiling trimethylene glycol (TREG) with zinc acetate . ZnO nanoparticles were grafted with a polyglycidol layer by ring-opening polymerization of glycidol (ZnO-PG). As calculated from the thermogravimetric analysis data, the weight ratio of the grafted PG was about 68 wt.%. Then, ZnO-PG was conjugated to the arginine–glycine–aspartate (RGD) peptide through stepwise organic reactions. The anticancer drug DOX was immobilized on ZnO-PG-RGD to form ZnO-PG-RGD/DOX, with particle size of 21.8 ± 0.9 nm. The drug-release rate reached 70.6% within 48 h, at a pH of 5.2, which was more than three times the value at a pH of 7.4. The grafted PG layer not only significantly improved the dispersibility, but also inhibited the uptake of ZnO nanoparticles by U87MG and HeLa cells. In contrast, ZnO-PG-RGD was selectively absorbed by U87MG, rather than by HeLa cells, demonstrating the obvious targeting. In another experiment, a porphyrin grafted lipid (PGL) ring has been used to load DOX and to apply synergistic chemo-PDT . Self-assembled liposomal PGL nanoparticles with hydrophilic cores were used to encapsulate DOX, using a pH gradient. The encapsulation efficiency was ~99%. The resulting PGL–DOX nanoparticles were highly stable and were successfully removed from the endolysosomal compartment after laser irradiation to release DOX in the cytosol. The PGL–DOX nanoparticles had good cellular uptake, a chemo-photodynamic response, and the ability to visualize fluorescence in various cell lines. After exposure to laser radiation, a significant decrease in viability of cells treated with a low molar concentration of PGL–DOX NPs was observed. In addition, in vivo experiments performed on a tumor xenograft model in mice demonstrated the ability of PGL–DOX accumulation in tumors due to passive targeted delivery. 
Through fluorescence imaging, the PGL–DOX biodistribution in tumors and in major body organs were also easily monitored in real-time in vivo. In Reference , a theranostic platform (MnO 2 -SiO 2 -APTES and Ce6; MSA & C) based on MnO 2 nanoflowers was synthesized, which provides synergistic therapy guided by MRI, including PDT and PTT in the second near-infrared window (NIR-II). Nanoflowers refer to chemical compounds that form structures that, under TEM, resemble flowers (or, in some cases, trees), sometimes called nanobouquets or nanotrees. In the study, MnO 2 nanoflowers have been proposed for the first time as a photothermal NIR-II agent. In the MSA & C system, MnO 2 nanoflowers were used to efficiently load the photosensitizer, relieve tumor hypoxia, and conduct NIR-II PTT tumor imaging. The large amount of photosensitizers, as well as reduced tumor hypoxia and hyperthermia, contributed to the improvement of PDT. Positively charged APTES was used to stimulate cellular uptake, further enhancing the treatment efficacy. Another study focused on the combination of hydrophobic and electrostatic non-covalent interactions for bimodal fluorescence/photoacoustic imaging of breast cancer . The authors integrated multicomponent hyaluronic acid (HA), protamine (PS), nanodiamonds (ND), curcumin (Cur), and ICG into a single nanoplatform (designated as HPNDIC). To achieve this goal, a two-stage build strategy was used. At the first stage, PS was used to modify ND clusters with the formation of positively charged PS @ ND (PND), with the simultaneous encapsulation of natural low molecular weight drug Cur and photosensitive ICG. Second, HA was adsorbed onto the outer surface of the PNDIC through charge complexation, in order to provide tumor-targeting ability. The resulting HPNDIC had uniform size, high drug-loading capacity, and excellent colloidal stability. It has been found that, under near-infrared irradiation conditions, ICG can be used for both PTT and PDT, resulting in an increased efficacy of Cur therapy (both in vitro and in vivo) with good biocompatibility. The presence of ICG and the accumulation of HPNDIC in vivo can be used for imaging by bimodal fluorescence/photoacoustic imaging. It has been shown that multifunctional theranostic nanostructures, consisting of superparamagnetic iron oxide and gold nanoparticles scuffed inside graphene oxide nanoflasts, can be used for double photo/radiation therapy due to near-infrared absorption of graphene oxide for photothermal therapy and radiosensitization by gold nanoparticles for enhanced radiation therapy. At the same time, this nanoplatform can also be detected by MRI imaging, due to the presence of iron compound nanoparticles. In a mouse carcinoma model, the platform showed 1.85 and 1.44 times higher therapeutic efficacy in combined photo and radiation therapy, respectively, compared to pure graphene oxide, which led to the complete destruction of tumors. A study has investigated the possibility of developing chitosan nanococktails containing nanoparticles of both nanocerium and superparamagnetic iron oxide . Nanocerium, which is capable of trapping reactive oxygen species, and iron oxide nanoparticles, used as imaging agents for MRI, were synthesized separately. Theranostics platforms have been constructed through two different mechanisms: electrostatic self-assembly and ionic gelation. 
These theranostic nanococktails have demonstrated the efficient uptake of reactive oxygen species and MRI contrast as a potential platform for the treatment and diagnosis of various diseases. As a platform for theranostics, dendrimer-modified gold nanorods for combined gene therapy and photothermal therapy for colon cancer have been synthesized . Gold nanorods grafted with polyamidoamine dendrimers (PAMAM, G3) have been modified with the GX1 peptide (cyclic 7-mer peptide, CGNSNPKSC). The resulting nanoplatform has been proposed as a gene-to-gene vector (FAM172A, which regulates the proliferation and apoptosis of colon cancer cells) for combined photothermal and gene therapy for colon cancer cells (i.e., HCT-8 cells). In addition, the computed tomography function using this platform can provide diagnostic data for colon cancer. Porphyrin lipids have been used to create several multimodal nanoparticle platforms, including liposome-like porphysomes (water core), porphyrin nanodrops (liquefied gas core), and ultra-small porphyrin lipoproteins. Porphyrin lipids were used to stabilize the water/oil interface to create porphyrin–lipid nanoemulsions with paclitaxel (PTX) loaded into an oil core (PLNE–PTX). This can facilitate combined PDT and chemotherapy at the same time. PTX (3.1 wt.%) and porphyrin (18.3 wt.%) were efficiently loaded into PLNE–PTX, forming spherical core–shell nanoemulsions that were 120 nm in diameter. PLNE–PTX showed stability upon delivery, resulting in high tumor accumulation (~5.4 ID%/g) in KB tumor mice. The PLNE–PTX combination therapy better inhibited tumor growth (78%) in a selective manner, compared to PDT (44%) or chemotherapy (46%), 16 days after treatment. In addition, the fourfold reduced dose of PTX (1.8 mg PTX/g) in the PLNE–PTX combination therapy platform has shown increased therapeutic efficacy, compared to taxol 7.2 mg PTX/kg, which may reduce the associated side effects. PLNE–PTX fluorescence allows for real-time tracking of the penetration of nanoparticles into the tumor. Researchers have developed metal nanoparticles in combination with cyclodextrin as a new platform to reduce the effects of traditional chemotherapy, such as stomach irritation, hair loss, neurotoxicity, and so on . Encapsulating drugs with metal nanoparticles can help to overcome the limitations of chemotherapy and efficiently transport anticancer drugs to the target site. This is due to various advantages, such as optimal size, surface morphology, higher conductivity, and in vivo stability. Such platforms allow for controlled drug release under the influence of NIR radiation or a magnetic field. Some commonly used chemotherapeutic agents, such as doxorubicin, paclitaxel, methotrexate, and so on, are rapidly degraded due to their hydrophobic nature and are unstable in vivo. Cyclodextrin provides structural compatibility for the encapsulation of such hydrophobic drugs and improves their loading capacity, solubility, and stability without exhibiting any systemic toxicity. Two-dimensional intermetallic PtBi/Pt nanoplates (PtBi NP) have been developed as a therapeutic platform for in situ oxygen production, thus overcoming tumor hypoxia to enhance PTT/RT . As they possess a high X-ray attenuation coefficient, PtBi NPs have demonstrated high sensitization characteristics at RT. PEGylated PtBi NPs (PtBi-PEG) exhibit high biocompatibility, increased circulation time in the blood, and increased accumulation in the tumor. 
PtBi-PEGs have also been used for tri-modal NIR contrast enhancement, PA, and X-ray imaging.
The concept of theranostics was introduced by Funkhouser in 2002, defined as the integration of two modalities—that is, therapy and medical imaging—into a single “package” of a material intended to overcome undesirable variations in biodistribution and therapeutic efficacy . Theranostic materials provide a window for monitoring the pharmacokinetics and pharmacodynamics of the drug introduced into the body. This concept was initially largely focused on cancer therapy, but it was later expanded to other pathologies. For theranostics applied to oncological diseases, the methods for synthesizing theranostic constructs have been systematized . This scheme is suitable for all theranostic constructs, if, by treatment, we mean drugs in each specific area . In the ever-growing field of personalized medicine, nanotechnology plays a vital role by integrating diagnostic and therapeutic functions into a single system called nanotheranostics . Using theranostic nanomaterials, it is possible to achieve the correct diagnosis and to develop an appropriate therapeutic intervention, while simultaneously targeting the diseased cells during systemic circulation, evading the immune system, and visualizing pathological areas . shows that nanosystems can be used for the individualized treatment of a particular patient and for increasing survival. The theranostic efficiency of nanomaterials gradually increases along with the use of “smart” and novel biomaterials . Theranostic constructs can be produced from materials that respond to biological environment, such as temperature, pH, an enzyme, or a specific target group that provides systemic release of the drug and reduces toxicity in healthy tissues. All of these are questions concerning the future, such as the transfer into clinical practice. At present, a huge number of theranostic constructs have been invented in such areas as cancer therapy, inflammatory diseases, autoimmune disorders, diabetes, cardiovascular diseases, neurological disorders, and liver and kidney therapy. The introduction of these systems into clinical practice is tempered by the associated complexity of synthesis, which means that the time has not yet come for industrial production.
Although Antarctica is one of the most pristine areas on Earth, anthropogenic activities such as tourism and the establishment of scientific research stations have led to contamination with persistent organic pollutants (POPs) in the region. Polycyclic aromatic hydrocarbons (PAHs) are a class of POPs that have raised concerns at the global level because of their toxic, mutagenic, teratogenic, and carcinogenic effects on organisms. PAHs are distributed in all parts of the Antarctic environment, including the atmosphere, water, snow, soil, and sediment. The main sources of PAHs in Antarctica are biomass and coal combustion, with PAHs primarily associated with the activities of scientific stations or transported to the region through atmospheric processes. Han et al. determined the average concentrations of 16 PAHs in environmental samples from Ardley Island, Antarctica. They found that phenanthrene accounted for a high proportion, with concentrations ranging from 1.95 to 42.18 ng/g, representing 24–50% of the total PAHs. The concentrations of PAHs in soil from the vicinity of the Bulgarian Antarctic Station (St. Kliment Ohridski) were studied by Abakumov et al. In these works, the concentration of PAHs in soils ranged from 170 to 200 μg/kg. Therefore, remediation efforts are essential to mitigate the long-term impact of PAH contamination. Biodegradation of PAHs in the Antarctic region by native microorganisms is considered to be a cost-effective and sustainable approach for PAH contamination remediation. Moreover, the introduction of nonindigenous microbes is not permitted in the Antarctic region. Therefore, the identification of indigenous Antarctic microorganisms with PAH-degrading capabilities is essential for the bioremediation of contaminated Antarctic environments. To date, various PAH-degrading bacteria have been isolated from Antarctica, including Pseudomonas, Rhodococcus, Sphingobium, and Acinetobacter, which are common genera found in soils from both polar and temperate regions. Dietz-Vargas et al. isolated Acinetobacter sp. OHIG3-2 from soil collected near the Chilean Antarctic Station and found that it could degrade 18% phenanthrene at 28 °C within 4 days. In addition, many studies have indicated that the application of Antarctic bacterial consortia is efficient for bioremediation. To our knowledge, published studies have been focused mainly on the biodegradation of petroleum hydrocarbons by Antarctic microbial consortia. Although recent studies have been focused on microbial community dynamics and researchers have attempted to identify the key bacterial players involved in the bioremediation process through high-throughput sequencing, there are still research gaps regarding the role of isolated strains in supporting this prediction. Sulbaran-Bracho et al. isolated the diesel-degrading bacterial consortia LR-30 and LR-10 from Antarctic rhizosphere soil. They found that the dominant bacterial genera of the LR-30 community were Achromobacter, Pseudomonas and Rhodanobacter, whereas those of the LR-10 community were Pseudomonas, Candidimonas and Renibacterium. van Dorst et al. investigated the microbial dynamics associated with large-scale bioremediation of hydrocarbon-contaminated soil in Antarctica and reported that the genera Alkanindiges, Arthrobacter, Dietzia, and Rhodococcus were responsible for hydrocarbon degradation.
Antarctic environments exhibit extreme climatic conditions characterized by low temperatures and low water availability, and these factors can inhibit or reduce metabolic activity in microorganisms, leading to low contaminant degradation rates. Furthermore, this region is affected by global warming, as evidenced by records showing a rise in surface water temperatures of approximately 1 °C in the Antarctic Peninsula between 1955 and 2004 and up to 2.3 °C over a century. A new all-time temperature record of 18.3 °C over the continental Antarctic region was observed on 6 February 2022 . Therefore, it is essential to have an in-depth understanding of microbial community structures and degradation potential or activity under various environmental conditions. Bacteria adapted to a broader temperature range may offer significant advantages in biotechnological applications. They have high scientific relevance, particularly for environmental monitoring and safeguarding these extreme ecosystems from anthropogenic impacts. In this study, we attempted to enrich PAH-degrading bacterial consortia from Antarctic soils and examined their degradation potential under different environmental conditions. This is crucial because it provides insights into the adaptability and effectiveness of these consortia in degrading PAHs under various scenarios, including under various ranges of temperature and levels of water availability. Additionally, high-throughput sequencing was employed to explore changes in the bacterial community during biodegradation and to identify the key and potential contributors to PAH degradation in the enriched bacterial consortia. PAH-degrading bacterial strains were potentially isolated and investigated for their ability to biodegrade PAHs, both as individual strains and as part of constructed consortia, to facilitate comprehensive research. This study can contribute to the development of efficient bioremediation strategies for PAH-contaminated Antarctic environments.
Soil samples and enumeration of total heterotrophic and PAH-degrading bacteria

The surface soil samples used in this study were collected around the Great Wall Station on King George Island in Antarctica during the 30th Chinese Antarctic Research Expedition (CHINARE30) in January 2014. The soil samples were taken at a depth of 0–10 cm from 20 locations (Table ), were stored at 4 °C for enumeration of total heterotrophic and PAH-degrading bacteria as well as enrichment of PAH-degrading bacterial consortia, and were stored at −20 °C for DNA isolation and sequencing analysis. The description of the sampling locations is summarized in Table and in a previous report. The numbers of total heterotrophic and PAH-degrading bacteria present in the soil samples were determined by the most probable number (MPN) method in 96-well microtiter plates, as described by Muangchinda et al. Total heterotrophic bacteria were counted in Luria–Bertani (LB) medium; low-molecular-weight (LMW) PAH-degrading bacteria were counted in mineral salt medium (MSM) supplemented with 250 mg/L phenanthrene; and high-molecular-weight (HMW) PAH-degrading bacteria were counted in MSM supplemented with 250 mg/L pyrene. The plates were incubated at 15 °C for 7 days (for total heterotrophic bacteria) or for 14 days (for PAH-degrading bacteria). Total heterotrophic bacteria were analyzed in positive wells by turbidity, while PAH-degrading bacteria were analyzed by respiration indicators. The numbers of total heterotrophic and PAH-degrading bacteria were retrieved from an MPN table.

Enrichment of PAH-degrading bacterial consortia from Antarctic soil

The enrichment of PAH-degrading bacterial consortia was performed following the methods of Sakdapetsiri et al. Briefly, a total of 10 g of each soil sample was added to 30-mL sterilized glass bottles supplemented with 100 mg/kg of a particular PAH (phenanthrene or pyrene) and incubated at 15 °C for 75 days. After the incubation period, 1 g of soil was taken from a glass bottle and added to 9 mL of MSM. Then, 100 µL of soil solution was spread onto MSM agar plates supplemented with 100 mg/L phenanthrene or pyrene and incubated at 15 °C. After two weeks of incubation, bacterial colonies were picked from the plates, added to 5 mL of MSM supplemented with 50 mg/L phenanthrene or pyrene and shaken at 200 rpm for 14 days at 15 °C. Additional tubes containing only PAH-supplemented MSM served as abiotic controls. After incubation, the culture broth, which appeared turbid and yellow/orange-colored compared with the control, was transferred to fresh medium containing PAH and cultured as described above. This step was repeated five times to obtain PAH-degrading consortia.

PAH biodegradation experiment

The enriched bacterial consortia were cultivated in 45 mL of tenfold diluted LB and incubated at 15 °C for 2 days. Bacterial cells were centrifuged at 8000 rpm and 4 °C for 10 min, washed twice, and suspended in sterile 0.85% NaCl (w/v) solution. The turbidity of the cell suspension was adjusted to 1.0 for optical density at 600 nm, equivalent to an initial viable cell count of 10⁶ colony forming units per milliliter (CFU/mL). The cell suspension was shaken at 200 rpm and 15 °C for 24 h to allow the cells to utilize the accumulated nutrients before the PAH degradation experiment was initiated. The cell suspension (0.5 mL) of the enriched consortia was inoculated in a tube containing 4.5 mL of MSM supplemented with 50 mg/L phenanthrene. The initial cell concentration was 10⁵ CFU/mL.
The tubes were incubated at 200 rpm and 15 °C. Uninoculated tubes containing MSM supplemented with 50 mg/L phenanthrene served as abiotic controls. Three test tubes were taken on days 0, 3 and 5 to analyze the residual phenanthrene content by high-performance liquid chromatography (HPLC) as described in a previous study and for DNA extraction as described in “DNA extraction and bacterial community structure analysis”. The HPLC system consisted of an LC-3A pump, an SPD-2A UV–visible detector and a C-RIA recorder (Shimadzu, Japan). Enriched bacterial consortia that exhibited the highest phenanthrene degradation efficiency were chosen for the PAH degradation tests. The tests involved assessing acenaphthene and fluorene (at 50 mg/L) degradation over 5 days and pyrene (20 mg/L) and benzo[a]pyrene (10 mg/L) degradation over 35 days. Uninoculated tubes containing MSM supplemented with PAHs served as abiotic controls. The cultures were incubated at 200 rpm and 15 °C. After cultivation, samples were collected for residual PAH analysis and DNA extraction.

Effect of temperature and water content on phenanthrene degradation

An effective PAH-degrading consortium was selected for investigation of the effects of temperature and water content on phenanthrene degradation. Phenanthrene degradation was performed using the method described above. The bacterial cells were collected to study the bacterial community. The effect of temperature on phenanthrene degradation was determined at 4, 15 and 30 °C. The effect of water content on phenanthrene degradation was evaluated by withholding PEG 6000 or adding 30% (w/v) PEG 6000 to MSM. Samples were collected for residual phenanthrene analysis and DNA extraction. Uninoculated tubes containing MSM supplemented with phenanthrene served as abiotic controls.

DNA extraction and bacterial community structure analysis

The genomic DNA of the enriched consortia was extracted in triplicate using a method described by Sharma et al. The extracted DNA was subjected to agarose gel electrophoresis and analysis via a NanoDrop™ 2000 spectrophotometer (Thermo Scientific, USA). The V3-V4 regions of the 16S rRNA gene were amplified using the primers 515F and 806R. High-throughput 16S rRNA gene amplicon sequencing was performed according to previously described methods. DNA libraries with dual indices were sequenced using an Illumina MiSeq platform (Illumina, CA, USA) with a 150 bp paired-end sequencing strategy. The raw sequences were processed and analyzed using QIIME 2 software tools version 2022.11 (https://library.qiime2.org). The reads were demultiplexed and qualified using the q2-demux plugin and denoised with Deblur. Taxonomic assignment was undertaken to obtain amplicon sequence variants (ASVs) using the q2-feature-classifier and classify-sklearn naïve Bayes taxonomic classifier against the SILVA SSU taxonomic data operational taxonomic unit (OTU) reference sequences. The indices of alpha diversity, including the Shannon index, evenness, Faith's PD, and observed OTUs, were determined in QIIME2.

Isolation of pure cultures from phenanthrene-degrading consortia

A tenfold serial dilution of the enriched consortia was prepared with 0.85% (w/v) NaCl solution. Then, the bacterial suspensions (10⁻⁵ to 10⁻⁷ dilution) were spread on LB plates and incubated at 15 °C for 3 days. Pure culture colonies were selected and then recultivated on LB agar. Colonies with different morphologies were selected, purified, and tested for their ability to degrade phenanthrene.
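Residual concentrations obtained from the HPLC analysis described above are typically converted into a degradation percentage relative to the abiotic control and, when several time points are available, into a pseudo-first-order rate constant. The snippet below shows this arithmetic with invented concentrations; it does not reproduce data from the study.

```python
# Percent degradation relative to the abiotic control and a pseudo-first-order
# rate constant from residual phenanthrene concentrations (illustrative values).
import math

def percent_degradation(residual_mg_l: float, abiotic_control_mg_l: float) -> float:
    return (abiotic_control_mg_l - residual_mg_l) / abiotic_control_mg_l * 100.0

def first_order_k(c0: float, ct: float, t_days: float) -> float:
    """k from ln(C0/Ct) = k*t, assuming pseudo-first-order kinetics."""
    return math.log(c0 / ct) / t_days

# Hypothetical day-5 result: 12 mg/L left in the culture, 48 mg/L in the control.
print(f"degradation = {percent_degradation(12.0, 48.0):.1f} %")   # 75.0 %
print(f"k = {first_order_k(48.0, 12.0, 5.0):.2f} per day")        # ~0.28 per day
```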
Isolation of pure cultures from phenanthrene-degrading consortia
A tenfold serial dilution of the enriched consortia was prepared in 0.85% (w/v) NaCl solution, and the bacterial suspensions (10⁻⁵ to 10⁻⁷ dilutions) were spread on LB plates and incubated at 15 °C for 3 days. Pure colonies were selected and recultivated on LB agar. Colonies with different morphologies were selected, purified, and tested for their ability to degrade phenanthrene. All the isolated strains were subsequently identified via 16S rRNA gene sequence analysis. Genomic DNA was extracted and purified using the GenUP™ Bacterial gDNA Kit (Biotechrabbit GmbH, Germany) following the manufacturer's instructions. The 16S rRNA gene was amplified using the universal primers 27F (5ʹ-AGAGTTTGATCACTGGCTCAG-3ʹ) and 1492R (5ʹ-CGGCTTACCTTGTTACGACTT-3ʹ), and the purified PCR products were sequenced by the Sanger method (First BASE Laboratories, Malaysia). The 16S rRNA gene sequences of all the isolates were compared pairwise with the reference sequences of the strains available in the EzBioCloud database ( www.ezbiocloud.net/ ).

Biodegradation of phenanthrene by the individual strains and constructed consortia
All the isolated strains were tested for phenanthrene degradation, and the results were compared between the individual strains and constructed consortia composed of two isolated strains combined at a 1:1 volume ratio. The experiments were conducted in test tubes containing 4.5 mL of MSM supplemented with 50 mg/L phenanthrene. All the tubes were incubated at 15 °C and 200 rpm for 15 days, and the remaining phenanthrene was extracted and analyzed via HPLC. Uninoculated tubes containing MSM supplemented with 50 mg/L phenanthrene served as abiotic controls.

Statistical analysis
The residual phenanthrene concentration was expressed as the mean ± standard deviation of at least three replicates. One-way analysis of variance (ANOVA) and Duncan's test were conducted using SPSS software version 29 ( https://www.ibm.com/spss ) (SPSS, Inc., Chicago, IL, USA). P ≤ 0.05 was considered to indicate statistically significant differences.
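The study performed these analyses in SPSS. The sketch below shows the same style of analysis in Python with invented residual-phenanthrene values: percent degradation against the nominal 50 mg/L starting concentration, one-way ANOVA across consortia, and Tukey's HSD as a stand-in post-hoc test for Duncan's multiple range test, which the libraries used here do not provide.

```python
# Sketch only: the study used SPSS (one-way ANOVA + Duncan's test).
# The residual-phenanthrene values below are invented, and Tukey's HSD
# stands in for Duncan's multiple range test.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

initial_mg_per_l = 50.0
residual = {                       # mg/L phenanthrene remaining (triplicates)
    "C13": [9.0, 8.6, 9.3],
    "C15": [7.2, 7.5, 7.0],
    "C23": [27.5, 26.8, 28.0],
}

for name, values in residual.items():
    pct = 100.0 * (1.0 - np.mean(values) / initial_mg_per_l)
    print(f"{name}: {pct:.1f}% degraded")

f_stat, p_value = f_oneway(*residual.values())
print(f"One-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4g}")

groups = [name for name, values in residual.items() for _ in values]
obs = np.array([x for values in residual.values() for x in values])
print(pairwise_tukeyhsd(obs, groups, alpha=0.05))
```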
Quantity of total heterotrophic and PAH-degrading bacteria in Antarctic soil samples
The numbers of total heterotrophic and PAH-degrading bacteria present in the soil samples collected from twenty locations around the Great Wall Station are reported in Table . The number of total heterotrophic bacteria in the soil ranged from 5.1 × 10⁴ to 1.1 × 10⁷ MPN/g soil. PAH-degrading bacteria were detected in 15 of the 20 sampling sites. The number of phenanthrene-degrading bacteria ranged from 3.0 × 10⁴ to 9.3 × 10⁵ MPN/g soil, with the greatest number found in sample P2, and the number of pyrene-degrading bacteria ranged from 3.6 × 10⁴ to 2.2 × 10⁵ MPN/g soil, with the greatest number found in sample P13. Location P2 is close to the oil tanks, and location P13 is near the station building; these active areas may have been affected by human activities and contaminated with petroleum hydrocarbons. Pongpiachan et al. quantified the total concentrations of twelve PAHs, including phenanthrene, anthracene, fluoranthene, pyrene, benz[a]anthracene, chrysene, benzo[b]fluoranthene, benzo[k]fluoranthene, benzo[a]pyrene, indeno[1,2,3-cd]pyrene, dibenz[a,h]anthracene and benzo[g,h,i]perylene, in soils collected around the Great Wall Station. Their results demonstrated that phenanthrene had the highest percentage contribution to these samples at 50%, followed by pyrene (18%) and fluoranthene (15.3%), and that the total concentrations of PAHs varied from 0.296 to 10.4 ng/g. Hydrocarbon contaminants are known to induce an adaptive response in indigenous microbial communities. It is therefore of particular interest to isolate native bacteria capable of degrading hydrocarbons for bioremediating contaminated areas in the Antarctic region, a topic that has become relevant because bioaugmentation with foreign organisms is prohibited in Antarctica. Furthermore, the presence of PAH-degrading bacteria in the environment can serve as an indicator of PAH contamination. Sphingobium xenophagum D43FB demonstrated effective phenanthrene degradation capability, achieving up to 95% degradation of 500 mg/L phenanthrene; this bacterium was likewise isolated from diesel fuel-contaminated Antarctic soils.

Enrichment of PAH-degrading consortia
In this study, three phenanthrene-enriched bacterial consortia (C13, C15 and C23) exhibited changes in culture color compared with the control, indicating potential biodegradation of phenanthrene (Fig. a). The ability of these consortia to degrade phenanthrene was tested, and the results of the biodegradation experiments are presented in Fig. b. Within 5 days at 15 °C, consortia C13 and C15 degraded 50 mg/L phenanthrene with efficiencies of 82.3% and 85.5%, respectively, whereas the lowest phenanthrene biodegradation was recorded for consortium C23 (45.3%). Although microbial activity is generally inhibited or lower at low temperatures, these results demonstrate the capacity of cold-adapted bacteria to biodegrade phenanthrene under low-temperature conditions. However, no changes were observed in the color of the medium of the pyrene-enriched cultures. Sulbaran-Bracho et al. investigated the growth of consortium LR-10 on different PAHs at a concentration of 100 mg/L; LR-10 grew on anthracene and phenanthrene but not on pyrene after incubation at 10 °C for 7 days.
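To put the 5-day efficiencies reported above on a common scale, one can back-calculate an apparent rate constant for each consortium; the pseudo-first-order assumption in the sketch below is ours for illustration only and is not a kinetic claim made in the study.

```python
# Back-of-the-envelope illustration: converts the reported 5-day removal
# percentages into apparent pseudo-first-order rate constants. The first-order
# assumption is an illustrative choice, not an analysis performed in the study.
from math import log

removal_after_5_days = {"C13": 0.823, "C15": 0.855, "C23": 0.453}

for consortium, fraction_removed in removal_after_5_days.items():
    k = -log(1.0 - fraction_removed) / 5.0       # apparent rate constant, per day
    half_life = log(2.0) / k                     # days
    print(f"{consortium}: k ≈ {k:.2f} per day, half-life ≈ {half_life:.1f} days")
```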
Pyrene is an HMW PAH, and its biodegradation occurs more slowly than that of LMW PAHs such as phenanthrene. Furthermore, the pyrene concentration in this study was greater than that in the Antarctic environment: Pongpiachan et al. reported that the average pyrene concentration in soils collected from the Great Wall Station was 0.570 ng/g. Although the PAH concentrations in Antarctic environments were previously reported to be low, recent studies have revealed an increased abundance of PAHs. The concentrations of phenanthrene and pyrene used in this study are greater than those typically found in real Antarctic environments, and exposure to elevated PAH concentrations in laboratory settings may induce bacterial adaptation and evolution, resulting in the selection of specialized bacterial strains with enhanced biodegradation capabilities. In this study, consortia C13 and C15 exhibited effective degradation of phenanthrene, and the degradation efficacies of the two consortia were not significantly different. Therefore, both consortia were selected for the biodegradation of other PAHs, including acenaphthene, fluorene, pyrene and benzo[a]pyrene.

PAH biodegradation by enriched bacterial consortia
The degradation of four PAHs, two LMW PAHs (acenaphthene and fluorene) and two HMW PAHs (pyrene and benzo[a]pyrene), by consortia C13 and C15 was determined. Both consortia were able to degrade 50 mg/L acenaphthene and fluorene at 15 °C within 5 days (Fig. ). Consortium C13 degraded 97% of the acenaphthene, which was significantly greater than the approximately 72% degradation achieved by consortium C15, and the degradation percentages of fluorene by consortia C13 and C15 were 70% and 55%, respectively. However, neither consortium could degrade pyrene or benzo[a]pyrene. These results are consistent with previous studies in which HMW PAHs were more difficult to biodegrade than LMW PAHs. Although few studies have reported biodegradation of PAHs other than phenanthrene by Antarctic bacterial consortia, there is research on the biodegradation of the PAH components present in petroleum oil. Sulbaran-Bracho et al. reported the degradation of n-alkanes and PAH compounds in diesel oil by consortium LR-30 isolated from Antarctic rhizosphere soil; the consortium metabolized more than 90% of aliphatic compounds and 50% of naphthalene and pyrene after 7 days of incubation. Based on this evidence, we concluded that the obtained consortia consist of efficient degraders of LMW PAHs, which is consistent with the location from which they were isolated, i.e., soils where LMW PAHs are abundant. Because consortium C13 effectively degraded various PAHs, it was selected for further experiments.

Effect of temperature and water availability on phenanthrene degradation
The biodegradation of PAHs is influenced by a variety of physical factors, and Antarctic environments exhibit extreme climatic conditions characterized by low temperatures and low water availability. Therefore, the effects of temperature and water availability on phenanthrene degradation by consortium C13 were evaluated. Temperature had an impact on both biodegradation efficiency and the microbial community; low temperature reduces biological activity and the rate of hydrocarbon degradation. In this study, the greatest phenanthrene degradation was achieved at the highest incubation temperature (30 °C) (Fig. a).
A decrease in temperature led to a delay or decrease in the degradation rate of phenanthrene. Consortium C13 completely degraded 50 mg/L phenanthrene within 5 days at 30 °C and within 7 days at 15 °C, whereas at 4 °C only 38% phenanthrene biodegradation was observed after 7 days of incubation. Vergeynst et al. studied hydrocarbon biodegradation at low temperatures and reported that the mineralization rates of hydrocarbons were 0.02%, 0.14% and 0.33% per day at 0, 4 and 15 °C, respectively. To evaluate the effect of water availability on phenanthrene degradation, PEG 6000 was used to reduce the water content in the culture media. Poor water availability can limit PAH biodegradation because it restricts the contact between PAHs and microbes that is necessary for biodegradation. In this study, consortium C13 maintained high biodegradation efficiency under low water availability (Fig. b); it completely degraded 50 mg/L phenanthrene at 15 °C within 7 days both in the presence and in the absence of PEG 6000. Liu et al. determined the effects of water availability on the phenanthrene biodegradation rate and reported that the highest rate of mineralization was observed at the highest water content; low-water-content conditions might limit nutrient diffusion and microbial movement, thus decreasing microbial activity and biodegradation. The ability of this consortium to maintain high biodegradation efficiency even under low water availability is noteworthy and contributes to our understanding of microbial degradation processes in challenging environments. To provide information on how bacterial consortia adapt to such extreme environments, we investigated their community responses.

Bacterial community response during PAH degradation
The bacterial communities in the enriched consortia and those in the corresponding original soil samples were characterized via high-throughput sequencing of 16S rRNA gene amplicons. The sequencing data can be accessed in the NCBI database (accession number: PRJNA1062590). The alpha diversity indices indicated lower bacterial diversity in the enriched consortia than in the original soil samples (Table ); PAH contamination has been shown to decrease the diversity and abundance of microbial communities. The bacterial community of the original soil samples comprised a total of 27 phyla, while the enriched consortia comprised a total of 6 phyla. Proteobacteria was the most abundant phylum in the enriched consortia, comprising 80–89% of the total sequences in a sample, and Actinobacteria was a minor phylum, accounting for 11–20% (Fig. ). Proteobacteria and Actinobacteria have been reported to be the most abundant phyla in both Antarctic soils and soils contaminated with PAHs. Figure a,b show all the genera belonging to the Proteobacteria and Actinobacteria phyla, respectively. After exposure to phenanthrene, the bacterial community of consortium C13 was dominated by Pseudomonas (Proteobacteria) (81%), Pseudarthrobacter (Actinobacteria) (15%) and Paeniglutamicibacter (Actinobacteria) (4%). In contrast, in consortium C15, Pseudomonas (50%), Polaromonas (Proteobacteria) (29%) and Paeniglutamicibacter (Actinobacteria) (20%) were dominant, while the community of consortium C23 was dominated by Pseudomonas (Proteobacteria) (87%) and Pseudarthrobacter (Actinobacteria) (10%).
The genera Pseudomonas, Pseudarthrobacter and Polaromonas have been found in petroleum hydrocarbon-degrading communities isolated from cold environments. Li et al. reported that Pseudomonas exhibited the highest relative abundance in methylcyclohexane-degrading communities derived from Antarctic surface water. Sulbaran-Bracho et al. determined the community composition of bacteria during diesel biodegradation by consortium LR-10 isolated from Antarctic rhizosphere soil and found that the dominant bacterial genera were Pseudomonas, Candidimonas, Rhodanobacter, Renibacterium, Pseudarthrobacter and Frateuria. Jurelevicius et al. investigated the microbial communities present in hydrocarbon-contaminated soils from King George Island, Antarctica, and found positive correlations between the abundances of Cytophaga, Methyloversatilis, Polaromonas and Williamsia and the concentrations of total petroleum hydrocarbons and/or PAHs. The effects of different PAHs on the bacterial community structures were also analyzed. The community composition of consortium C13 did not change with exposure to different PAHs, whereas the bacterial community structure of consortium C15 did change when exposed to different PAHs (Fig. a): the abundance of Polaromonas in the C15 community decreased when the consortium was exposed to acenaphthene and fluorene. PAHs can significantly impact the composition of bacterial communities, and some bacterial groups respond rapidly to changing environmental conditions. Ahmad et al. reported that PAH type significantly affects bacterial community composition and structure; the bacterial community compositions of the enriched consortia in the pyrene, benzo[a]pyrene and benzo[a]fluoranthene treatments were significantly different from those in the phenanthrene treatments. This disparity might result from the greater toxicity of the former compounds compared with the latter, or from the inability of certain bacterial groups to utilize certain compounds as carbon and energy sources.

Bacterial community composition of consortium C13 under different environmental conditions
Incubation temperature influenced the bacterial communities in consortium C13. At 15 °C, Pseudomonas had the highest relative abundance in the C13 microbial community, suggesting its potential role as the key phenanthrene degrader at this temperature (Fig. b). Several studies have demonstrated the ability of Pseudomonas to degrade phenanthrene; Ji et al. reported that Pseudomonas sp. Lphe-2, isolated from the aerobic sludge of a coking plant, could degrade approximately 20% of phenanthrene (100 mg/L) at 15 °C within 7 days. However, when the temperature decreased to 4 °C, the proportion of Pseudomonas also decreased; after 7 days of incubation, the abundance of Pseudarthrobacter increased, and this change was accompanied by the degradation of phenanthrene. When the temperature increased to 30 °C, the proportion of Pseudarthrobacter markedly increased while that of Pseudomonas decreased, and the abundance of Rhodococcus also increased under these conditions. Members of the genus Pseudarthrobacter have been shown to degrade phenanthrene; Li et al. reported that Pseudarthrobacter sp. L1SW was able to degrade 96.3% of 500 mg/L phenanthrene within 3 days at 30 °C. Moreover, studies have revealed that members of the genus Pseudarthrobacter can grow over a wide range of temperatures.
For example, Pseudarthrobacter albicanus NJ-Z5ᵀ, isolated from Antarctic soil, has been shown to grow at temperatures ranging from 4 to 28 °C, and Pseudarthrobacter humi RMG13ᵀ, isolated from soil, can grow within a temperature range of 4–37 °C. Under poor water availability, the bacterial communities in consortium C13 were also dominated by Pseudarthrobacter, similar to the observations reported above (Fig. c). Because consortium C13 maintained high phenanthrene biodegradation efficiency at low water content, Pseudarthrobacter might play an important role in phenanthrene degradation under conditions unsuitable for most other microorganisms. Muangchinda et al. investigated the impact of environmental conditions on the degradation of mixed PAHs by the SWO consortium and found that the consortium retained its biodegradation capacity through alterations in the bacterial community structure and adaptation to changing environmental conditions.

Identification of pure strains isolated from phenanthrene-degrading consortia
Six cultivable strains were isolated from the phenanthrene-degrading consortia and taxonomically identified based on 16S rRNA gene sequence analysis. The sequences of all the isolates were deposited in the GenBank database under accession numbers OR889009–OR889014 (Table ). Four of the six isolated strains belonged to the genus Pseudomonas: strains ANT13_1 and ANT15_1 were identified as P. silesiensis, while strains ANT15_3 and ANT23_1 were identified as P. frederiksbergensis and P. fildesensis, respectively. Strain ANT13_2 was proposed as representing a novel species named Paeniglutamicibacter terrestris, and strain ANT23_2 was closely related to the actinobacterium Pseudarthrobacter humi. These findings indicate that strains belonging to the genera Pseudomonas and Pseudarthrobacter, which were identified as the predominant genera in the enriched consortia by 16S rRNA gene amplicon sequencing, were successfully isolated. Members of the genus Pseudomonas are dominant PAH-degrading bacteria and cold-adapted indigenous bacteria in Antarctic soils, and the isolated species have previously been reported as cold-tolerant. Several species, including P. silesiensis and P. frederiksbergensis JAJ28ᵀ, have been reported to be capable of growing on and degrading phenanthrene. However, to our knowledge, phenanthrene degradation by P. fildesensis, Paeniglutamicibacter terrestris and Pseudarthrobacter humi at low temperatures has not been studied. To provide evidence for the potential involvement of the isolated strains in phenanthrene degradation, their degradation capabilities were investigated.

Synergistic degradation of phenanthrene by the isolated strains
Phenanthrene degradation (50 mg/L) during a 15-day incubation at 15 °C was compared among the individual strains and the constructed consortia, which consisted of bacteria from two genera, Pseudomonas and Paeniglutamicibacter or Pseudarthrobacter (Fig. ). Among the individual strains, Pseudomonas sp. ANT13_1 exhibited the greatest phenanthrene degradation (22.4%). Similarly, for phenanthrene degradation by individual strains at low temperatures, Pseudomonas sp. JM2 has been reported to degrade 12% of phenanthrene (50 mg/L) at 4 °C. Previous studies have reported that Pseudomonas species possess PAH-degrading enzymes as well as cold-adaptive enzymes.
For example, Song et al. reported that P. fluorescens S01 could degrade PAHs and heterocyclic PAHs under cold stress; the genome of this strain contains numerous systems for the catabolism of PAHs and heterocyclic PAHs and harbors numerous cold-adaptation systems. The constructed consortia significantly enhanced phenanthrene degradation, indicating that the strains had no inhibitory effects on one another. Two constructed consortia, ANT15_3 + ANT23_2 and ANT23_1 + ANT23_2, which consisted of Pseudomonas spp. and Pseudarthrobacter sp., exhibited high phenanthrene degradation efficiencies of 43% and 52%, respectively, and the constructed consortium ANT23_2 + ANT13_2 (Pseudomonas sp. and Paeniglutamicibacter sp.) degraded 32.4% of the phenanthrene. These results indicate that the constructed consortia exhibited significantly greater phenanthrene degradation capabilities than the individual strains. In a previous study, Kocuria flava and Rhodococcus pyridinivorans were shown to degrade pyrene with efficiencies of 53.8% and 56.2%, respectively, within 15 days of incubation, and a consortium consisting of both strains achieved 56.4% pyrene degradation, indicating that the two strains did not inhibit each other. Dechsakulwatana et al. reported that a constructed consortium consisting of Sphingobium naphthae MO2-4 and Bacillus aryabhattai TL01-2 degraded approximately 43% of 50 mg/L phenanthrene within 7 days, while the individual strains degraded approximately 32–38%. Our results, together with the bacterial composition profiles of the enriched consortia, indicate that both Pseudomonas spp. and Pseudarthrobacter sp. are key degraders of phenanthrene at low temperatures (Figs. and ). A few reports have provided information on phenanthrene degradation by Pseudarthrobacter; Li et al. reported that Pseudarthrobacter sp. L1SW degraded 96.3% of 500 mg/L phenanthrene within 3 days at 30 °C, and there are reports indicating that Pseudarthrobacter species adapt to cold temperatures. Therefore, this study serves as a starting point demonstrating the synergistic ability of Pseudomonas and Pseudarthrobacter to increase phenanthrene degradation. These data are essential for developing potential bioremediation strategies to treat contaminated soil in cold areas for efficient pollutant removal. This is the first report on the use of a synthetic consortium of the genera Pseudomonas and Pseudarthrobacter isolated from Antarctic soil for effective phenanthrene degradation at low temperatures. A possible explanation for the synergistic effect is that, when the two strains are cocultured, Pseudomonas spp. may increase the solubilization and enhance the bioavailability of phenanthrene by producing biosurfactants. Furthermore, Pseudomonas spp. may provide protection against phenanthrene toxicity through biofilm formation and exopolysaccharide production. The Antarctic Pseudomonas sp. ID1 has been reported to produce exopolysaccharides that have a cryoprotective effect on Pseudomonas sp. ID1 and other bacterial cells, and Pseudarthrobacter species have been reported to possess specific survival strategies to cope with extreme environmental conditions, such as cold shock- and heat shock-protection genes. Moreover, to enhance phenanthrene degradation by synthetic consortia, some minor bacterial genera identified in the 16S rRNA gene amplicon community data, such as Rhodococcus and Polaromonas, should be targeted in future isolation attempts from the enriched consortia.
A common challenge in previous studies was that some bacteria could not be cultivated by the methods used; moreover, slow-growing synergistic partners were lost during isolation and/or because of metabolic dependencies within the microbial communities.
Our findings demonstrated that Antarctic soils obtained from the Great Wall Station harbored bacteria capable of degrading PAHs at low temperatures. The enriched consortia derived from these soils exhibited high efficiency in phenanthrene biodegradation across a broad range of temperatures and at varying levels of water availability. Efficient phenanthrene degradation under cold conditions is noteworthy, particularly considering the challenges associated with bioremediation in polar regions. Furthermore, the environmental adaptability of the consortium enhances its potential applicability in diverse Antarctic habitats with varying environmental conditions. The results of 16S rRNA gene amplicon sequencing revealed that the phenanthrene-degrading consortia were dominated by Pseudomonas and Pseudarthrobacter . Both genera were successfully isolated from phenanthrene-degrading consortia, and these strains demonstrated the ability to degrade phenanthrene at low temperatures. Furthermore, constructed consortia, consisting of Pseudomonas spp. and Pseudarthrobacter sp., exhibited greater efficiency in terms of phenanthrene degradation than did the individual strains. These findings indicate that Pseudomonas and Pseudarthrobacter play important roles in phenanthrene degradation under low-temperature conditions. Additionally, these findings suggest that bacterial species can synergistically interact to enhance bioremediation efficiency, particularly in cold environments such as Antarctica. To gain a comprehensive understanding of the phenanthrene degradation pathway and the potential synergistic activity of these two strains, further studies must be conducted that include an evaluation of the intermediates of phenanthrene biodegradation and whole-genome analysis. Moreover, it is important to investigate the ability of these bacteria to produce biosurfactants, form biofilms in the presence of phenanthrene, and produce exopolysaccharides in addition to their cryoprotectant properties. Furthermore, to develop an efficient bioremediation strategy for Antarctic soils, bioaugmentation with the constructed consortia should be investigated to treat PAH-contaminated soils.
How to train practising gynaecologists in total laparoscopic hysterectomy: protocol for the stepped-wedge IMAGINE trial | 2c5d2893-1cdf-43d4-a6ad-a9fe00bea88a | 6528001 | Gynaecology[mh] | Hysterectomy is the most common major gynaecological procedure in women. Reviews and meta-analyses conclude that minimally invasive approaches should be preferred over total abdominal hysterectomy (TAH), and should be considered whenever clinically possible. The American College of Obstetricians and Gynaecologists (ACOG), American Association of Gynecologic Laparoscopists (AAGL), Society of Gynecologic Oncology (SGO), European Society for Medical Oncology (ESMO) and the Society of Obstetricians and Gynaecologists of Canada (SOGC) have all published guidelines highlighting the benefits of minimally invasive surgery for women with benign and malignant gynaecological conditions. Despite the evidence base supporting minimally invasive approaches, and the recommendations by professional societies to decrease TAH, almost 40% of hysterectomies in Australia are still performed using this approach, and a similar proportion of cases are done using a vaginal approach. Total laparoscopic hysterectomy (TLH) is a minimally invasive surgical approach to remove the uterus, with or without the adnexae, to treat benign gynaecological conditions such as uterine fibroids and adenomyosis or to prevent or treat cancers of the cervix, uterus, fallopian tubes or ovaries. TLH was developed to allow the surgery to be completed entirely laparoscopically, and has been shown to be feasible and safe; compared with TAH, TLH is associated with improved recovery, shorter hospital stay, reduced risk of surgical complications and equivalent disease-free outcomes for treating endometrial cancer. While TLH is a slightly more costly procedure than TAH, TLH has been shown to be cost-effective when the total cost of care is considered. Robotic hysterectomy is an alternative approach to hysterectomy used in some highly developed countries, especially the USA. However, its use in other countries, including Australia, is still limited due to the significant costs associated with robotic technology.
To investigate why TAH was still commonly used, a survey of Australian and New Zealand gynaecologists was conducted; it identified two main barriers impeding the uptake of TLH: (1) surgeons' lack of procedural skills for TLH and (2) the limited availability of structured training and mentoring opportunities to assist practising surgeons to gain those skills. A survey of women who had undergone hysterectomy found that they commonly followed the advice of their doctor with regard to the type of hysterectomy and rarely sought a second opinion. International evidence shows that structured education and training are effective in decreasing the use of TAH: in Finland, between 1996 and 2006, the proportion of hysterectomies conducted by TAH fell from 58% to 24% through training and education, and postoperative complications decreased significantly in parallel. In Canada, between 2005 and 2012, the proportion of hysterectomies conducted by TLH increased from 40% to 74% through stakeholder engagement and structured learning. In Australia, no other formal training programme exists to teach advanced laparoscopic techniques such as TLH; this study will therefore implement and evaluate a model of training for practising gynaecologists in TLH.
Primary objective To decrease the proportion of hysterectomies conducted by TAH by 30% in 75% of gynaecological surgeons, through a surgical outreach training programme delivered at the trainee's hospital. Secondary objectives By decreasing the proportion of patients who receive a TAH, to reduce: incidence of surgical adverse events (AEs) in patients receiving a hysterectomy by 20%; length of stay (LoS) for patients requiring a hysterectomy by 20%; and direct hospital costs for hysterectomy by 10%. And to: evaluate surgeon trainees' experiences of the training programme; and assess satisfaction with the training programme, and the views of relevant stakeholders on benefits and barriers to training.
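As a concrete illustration of how the primary objective could be operationalised, the following minimal sketch (in Python; the trainee figures are entirely hypothetical and this is not part of the protocol's analysis plan) computes each trainee's relative reduction in the TAH proportion and checks whether at least 75% of trainees achieve a reduction of at least 30%.

```python
# Illustrative check of the primary objective: a >=30% relative reduction in the
# proportion of hysterectomies performed by TAH, achieved by >=75% of trainees.
# All figures below are hypothetical, not trial data.

def relative_reduction(baseline: float, post: float) -> float:
    """Relative reduction in the TAH proportion from baseline to post-intervention."""
    return (baseline - post) / baseline

# Hypothetical per-trainee TAH proportions (baseline, post-intervention).
trainees = {
    "trainee_A": (0.45, 0.28),
    "trainee_B": (0.40, 0.30),
    "trainee_C": (0.50, 0.20),
    "trainee_D": (0.35, 0.33),
}

achieved = [relative_reduction(b, p) >= 0.30 for b, p in trainees.values()]
proportion_achieving = sum(achieved) / len(achieved)

# Primary objective met if at least 75% of trainees achieved the 30% reduction.
print(f"{proportion_achieving:.0%} of trainees achieved a >=30% reduction")
print("Primary objective met:", proportion_achieving >= 0.75)
```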
Study design and setting The Implementation of Minimally Invasive Hysterectomy Trial (IMAGINE) will follow a stepped-wedge implementation trial design ( ), evaluating a surgical training programme for practising obstetrician-gynaecologist specialists in four hospitals, to reduce the proportion of patients who receive a TAH by increasing the proportion who may receive a TLH instead. Study population The participants in this study include the trainer surgeons and trainee surgeons, theatre staff and hospital administrators. Inclusion criteria Inclusion criteria are provided in . Hospitals are eligible if there is institutional support and perceived need to increase the uptake of TLH, if there are at least two potentially eligible surgeons willing to be trained, if sufficient cases are available at the hospital for training and maintenance of benefit after training ceases, and if a training operating theatre exempt from national elective surgery targets can be made available. Hospitals can express a desire to participate or be approached by the study team. Surgical mentors (SM) must be experienced in both TAH and TLH, have completed ≥100 TLH procedures, have no personal or professional relationship with the trainee surgeons that would impede training, have completed a train-the-trainer course, and be willing to provide the necessary teaching and support for trainees and to complete all relevant assessment and reporting. Surgical trainees (ST) must be Fellows of the Royal Australian and New Zealand College of Obstetrics and Gynaecology (RANZCOG) (specialists), competent in laparoscopic surgery such as laparoscopic ovarian cystectomy without complexity, laparoscopically assisted vaginal hysterectomy (uterine artery taken vaginally) without complexity and excision of stage 2 endometriosis, oophorectomy or removal of an ectopic pregnancy, and be willing to complete all required training days in full, as well as the necessary study assessment and reporting. At each hospital, up to three surgeons will be selected for training. Other surgeons not selected for training will be asked to also enter data on all their hysterectomies as a comparison group. Theatre staff, hospital administrative staff and other relevant hospital participants are eligible if they are involved in the training programme and willing to participate in semistructured interviews to assess their personal views on benefits and future improvements of the training programme. Recruitment Potentially eligible hospitals known to offer hysterectomies will be contacted by the project manager to assess historical (3-year) hysterectomy activity (TAH, TLH, other), to explore the need for TLH training, and to gauge interest in participating in the trial. The TLH procedure For consistency, throughout all training and coaching, the surgical mentors perform the TLH procedure according to the steps described by McCartney and Obermair; it is expected that the surgical trainees also become skilled in performing the operation according to those steps. Development and piloting of the training model The training model was developed and refined through a pilot feasibility study at one hospital, involving one trainer (with experience of >2000 TLHs) and three qualified gynaecological surgeons as trainee participants. The surgical trainee participants were Fellows of RANZCOG and already possessed general laparoscopic skills, but had minimal practical experience with TLH.
The model comprises three sequential phases: (1) Planning and preparation; (2) Delivery of surgical training and (3) Programme evaluation ( ). Study procedures Phase 1: planning and preparation The first phase of the training model involves selection of the training hospital, using the hospital selection tool questionnaire; identification of appropriate surgical mentors, using the surgical trainer/mentor selection tool questionnaire; and selection of appropriate surgical trainees ( ). All SMs attend a TLH train-the-trainer course; this practical course teaches surgical coaching skills and aims to equip the attendees as trainers. The course was adapted from existing train-the-trainers courses in flexible endoscopy and laparoscopic colorectal surgery. To maintain surgical volume for the trainees, a maximum of three STs may be trained simultaneously; the remaining surgeons not selected for TLH training support their colleagues in the training, and offer their hysterectomy cases to ensure that sufficient training cases are available; in return, non-selected surgeons are offered the same benefits in the future. Once these steps have been completed, a faculty focus meeting is held at the training hospital, during which information on surgical case load and key outcomes such as conversions at baseline are reviewed, and organisational/surgical trainee barriers to the adoption of TLH are discussed. This includes important practical concerns such as providing sufficient training cases to the STs, having a dedicated theatre that would be exempt from surgical target pressures, an anaesthetist that would support a reversed Trendelenburg position for laparoscopic surgery and all necessary equipment available to make the training feasible. To ensure that nursing staff, as well as the STs, gain experience and expertise, the SM, department head and the nursing theatre managers identify two scrub nurses to be involved throughout the training process. Attendance is also offered to anaesthetic staff. Phase 2: delivery of surgical training Surgical training is delivered through a sequential process of preceptorship (conducted at the SM's hospital), proctorship (conducted at the STs' hospital) and assessment and proficiency certification. Each trainee receives a laparoscopic simulator training device for personal use, which allows practising laparoscopic techniques in between training sessions. Training case selection For both preceptorship and proctorship stages, suitable patients are selected in advance by the SM in consultation with the trainees. Criteria are: (1) suitable patient classified as low risk, as measured by the SurgicalPerformance Risk of Surgical Complications app (RISC); (2) uterus size <10 weeks; (3) no previous laparotomy; (4) ≤2 previous caesarean sections; (5) a reasonably mobile uterus; and (6) not being on blood thinning medication (a schematic check of these criteria is sketched below, after the proctorship description). Preceptorship Preceptorship aims to demonstrate the flow of TLH in a well-performing team and surgical environment; it shows the visual aspects of the procedure and surgical setup, the atmosphere in theatre and the sound levels that the STs should aim for when proficient. Preceptorship is provided in two stages: stage 1 involves a 1-day workshop attended by three STs and two local surgical scrub nurses. The topics comprise composition of a surgical team, surgical setup, positioning of equipment, primary and secondary port placement, surgeon's posture, effective use of laparoscopic instruments and an overview of the steps of the TLH procedure.
During this stage, the SM also explains study procedures, ensures baseline surveys are completed, and explains how to use the database for recording AEs. Stage 2 involves the STs and the nursing staff attending a live TLH. The SM acts as the lead surgeon, and through demonstration reinforces the topics covered in stage 1 of preceptorship while introducing the team to each practical step of the TLH. Each surgery is video-recorded and used to facilitate a postsurgery debrief discussion between the SM and the team. Proctorship Proctored training consists of up to 10 training days conducted by the SM, with up to three TLH procedures conducted per day; it is provided in a dedicated training theatre at the STs' hospital using an identical configuration to that used in the preceptorship stages. The SM introduces themselves to everyone in the operating theatre, explains the aim of the day's session and then checks the equipment. The patient is brought into the operating room, and the SM supervises the patient positioning and discusses equipment, setup and port placement. The SM and STs agree on a set of stopping rules: if/when a member of the surgical team calls for a stop, the operation will be paused. A stop is typically called to pause and provide an opportunity to explain anatomy or a surgical procedural step. Not adhering to the stop may translate into a lost opportunity for learning. A stop call does not necessarily translate into the operation being taken over by the SM. During proctorship, the SM acts as the primary surgeon for the first case, with the STs acting as surgical assistants. For subsequent cases, the SM acts as surgical assistant; to avoid fatigue and exhaustion, for the first two training days, the STs perform only part of the procedure. As needed, the SM takes over to demonstrate specific procedural steps. As the STs become increasingly familiar with the TLH technique, the active involvement of the SM decreases, until the SM is eventually just present in the operating room, supervising and demonstrating on the screen. Proctorship is the longest and most demanding component of TLH training. One of the challenges is for the trainees to renounce previously developed habits which are incompatible with a TLH operation. During the training period, the SM may approve the STs assisting each other with TLHs outside of the regular proctored surgical training days, subject to them providing the mentor with updates on surgical outcomes. As with the preceptorship stage, to allow debriefing and to illustrate learning points, a video recording of each case is made. During debriefing, the SM provides detailed feedback to the STs, and the STs provide feedback on the SM's training, ideas to enhance the experience and goals they would like to set for the next session. Following each surgical training day, the STs are provided with tasks to complete before the next session. These tasks include reviewing videos of TLH procedures, active watching of tutorial videos and writing of affirmations on how to complete specific surgical tasks. Between training days, the surgical mentor is available to answer questions from the trainees by email. Formative assessments of both the STs and SM are conducted after each case, including completion of the Global Operative Assessment of Laparoscopic Skills tool. These assessments help the ST to identify their strengths and weaknesses, and to target areas for improvement; they also allow the SM to adapt their coaching style as needed.
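Returning to the training-case selection criteria listed earlier, the following minimal sketch (Python; the field names and example case are hypothetical, and the low-risk classification would in practice come from the SurgicalPerformance RISC app rather than from code like this) shows how the six criteria combine into a single suitability check.

```python
# Schematic screen of the training-case selection criteria.
# Field names and the example case are hypothetical illustrations only.
from dataclasses import dataclass

@dataclass
class Candidate:
    risc_low_risk: bool        # classified low risk by the RISC assessment
    uterus_size_weeks: int     # clinical uterine size in gestational weeks
    previous_laparotomy: bool
    previous_caesareans: int
    mobile_uterus: bool
    on_blood_thinners: bool

def eligible_training_case(c: Candidate) -> bool:
    """All six selection criteria must be satisfied for a suitable training case."""
    return (
        c.risc_low_risk
        and c.uterus_size_weeks < 10
        and not c.previous_laparotomy
        and c.previous_caesareans <= 2
        and c.mobile_uterus
        and not c.on_blood_thinners
    )

example = Candidate(
    risc_low_risk=True,
    uterus_size_weeks=8,
    previous_laparotomy=False,
    previous_caesareans=1,
    mobile_uterus=True,
    on_blood_thinners=False,
)
print("Suitable training case:", eligible_training_case(example))  # True
```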
Assessment of proficiency On successful independent completion of at least 10 TLHs, or any time thereafter, STs can submit deidentified video-recordings of two completed TLHs for independent assessment by two senior laparoscopic surgeons. Assessments will be conducted using the Laparoscopic Competency Assessment Tool (L-CAT) provided in the online supplementary material (10.1136/bmjopen-2018-027155.supp1). Phase 3: programme evaluation The programme is evaluated by examining the surgical outcomes of the patients who underwent surgery during the training programme, and the experiences of the STs, the SM, the theatre staff, hospital staff and other relevant stakeholders involved in the training programme. Qualitative interviews with nursing and theatre staff will allow assessment of the benefits of the training programme for their practice, and any barriers that the whole theatre team may envisage for a future rollout of training. By comparing the surgical outcomes of patients treated during the training programme with those of expert surgeons, it will be determined whether the treatment that patients received was equivalent to best practice. A summative assessment, Satisfaction with Laparoscopic Hysterectomy Training Programme, is completed by each ST at the conclusion of the programme. Semistructured interviews are conducted to explore participants' knowledge, attitudes and reflections about the programme, the integration of the new surgical skills into day-to-day practice, the ability of the programme to induce change in surgical approach, views on whether the training was successful, whether skills will be maintained and whether the programme could be used for future training. Interviews are recorded and transcribed verbatim for thematic analysis. In parallel with TAH rates, the rates of hysterectomies performed vaginally, by a laparoscopically assisted vaginal approach or by TLH will also be collected from before the intervention through to follow-up. Outcomes Primary The proportion of hysterectomies performed by TAH, comparing preintervention baseline and postintervention rates. We aim to decrease TAH by at least 30% in 75% of the trainee surgeons. Based on published evidence from other countries and based on our own experience from a pilot study, we expect that a surgical training programme to teach TLH will achieve a higher TLH rate, which will translate into better patient outcomes and reduced surgical AEs; a decrease in TAH by 30% in at least 75% of the trainees would be clinically important. Secondary Number of hospitals screened, eligible and committed to the training programme, and number of hospitals that complete the training programme. Number of surgeons screened for eligibility, eligible and committed to the training programme; number who complete the training programme; and number who achieve proficiency, as assessed by two independent assessors who review two deidentified videos of TLHs completed independently by the trainee, using the Competency Assessment Score. Proportion of trainees achieving proficiency in correct theatre setup, vascular exposure, mobilisation and surgery closure, as assessed by the trainer using the formative assessment trainees' form (change in proportion proficient over time).
AEs (conversion from TLH to TAH, any anaesthetic incident, intraoperative visceral injury, red cell transfusions, hospital stay greater than 7 days, incidental finding of a malignancy, unplanned readmission, intensive care unit (ICU) admission or return to theatre, postoperative pulmonary embolus (PE) or deep vein thrombosis (DVT), development of a fistula, vault haematoma, vaginal vault dehiscence or pelvic infection). Hospital LoS (days). Cost-effectiveness (cost items: theatre staffing costs; equipment and consumables; Medicare Benefits Schedule items for surgical and anaesthetic fees; costs of health services used after surgery; costs of bed days; and costs due to AEs, readmissions or visits to the emergency department). Trainee surgeon proficiency with TLH assessed by two independent assessors from two trainee-submitted anonymised videos (L-CAT Competency Assessment Score). Enrolment The hospital is the unit of analysis. All selected STs within a hospital will be assigned to receive the training programme at the same time. Other hospitals not yet assigned to intervention will continue with standard care until they are ready to start intervention. Hospitals will be informed of their intervention start 1 month prior to commencement. Blinding of hospitals/STs is not feasible. If several hospitals are ready to receive the intervention at the same time, random switching allocation will be performed by the National Health and Medical Research Council (NHMRC) Clinical Trials Centre using a computerised system. Hospitals must demonstrate eligibility and hospital site-specific approvals before commencement. Duration The study commenced on 3 August 2017 and is expected to require a maximum of 36 months to complete ( ), with hospitals entering the training phase sequentially. This comprises approximately 3 months of setup, ≥3 months of baseline data collection, 24 months of intervention, ≥3 months of follow-up data collection and approximately 3 months for analysis. Data collection and management The study manager will record results of the following assessments in Research Electronic Data Capture: (1) hospital selection tool; (2) trainer/mentor selection tool; (3) surgical training programme participant selection tool; (4) questions for O&G surgeons; (5) formative assessment of the trainees; (6) formative assessment of the trainer/mentor; (7) the Global Operative Assessment of Laparoscopic Skills; (8) satisfaction with laparoscopic hysterectomy training programme; (9) L-CAT tool; and (10) Medicare Benefits Schedule codes. Clinical outcomes data will be collected using the SurgicalPerformance surgical reflection tool. STs or their representatives will enter patient information, surgical procedure details and outcomes directly into their account at SurgicalPerformance.com or provide their data to be entered by trial staff on their behalf. Similarly, data will also be entered by the surgeons who were not selected for training to allow comparison. Data collected from participants will remain confidential at all times. No identifiable data will ever be shared with third parties. All questionnaires, screening tools, interview recordings and transcripts, videos and other data will only have the participant ID number on them to protect privacy. The study ID numbers will be password-protected and only accessible to the study investigators and project manager. Electronic study materials will be held in password-protected computers and hard copy documents will be stored in secure cabinets.
All data transferred to and from SurgicalPerformance are encrypted. Statistical analysis Primary outcome The primary outcome is the proportion of hysterectomies performed by the trainees by TAH, comparing the preintervention baseline and postintervention periods. For the programme to be worthwhile, we wish to decrease TAH by 30% in 75% of the trainees. We assume that a higher TLH rate will translate into better patient health outcomes, including fewer surgical AEs; a drop in TAH by 30% is clinically relevant. The primary outcome of the proportion of TAH procedures will be analysed within each site using a χ2 test. These proportions will then be pooled (using inverse variance weighting) across the hospital sites to provide an overall difference in the rate of TAH for the participating trainees (a schematic sketch of this pooling is given below). Secondary outcomes Secondary outcomes will be the increase in surgeons' surgical skills, as assessed by the formative assessment forms during training; proficiency in TLH (laparoscopic assessment tool)—independently assessed by expert reviewers from two videos submitted by the trainees at the end of the training programme; and AEs and hospital LoS throughout the study period. Data on surgical approach, conversion from TLH to TAH, any anaesthetic incident, intraoperative visceral injury, red cell transfusions, hospital stay >7 days, incidental finding of malignancy, unplanned readmission, ICU admission or return to theatre, postoperative PE or DVT, development of fistula, vault haematoma, vaginal vault dehiscence or pelvic infection will be extracted from the database. These will contribute to a weighted composite complication score. We aim for a 20% reduction in surgical complications. Data from non-training surgeons will be used to measure changes over time in surgical practice without intervention. Economic calculations will estimate the costs of providing the training programme, the costs of AEs and hospital bed days (comparing each hospital from before to after training), and the costs of surgery conducted by trainee surgeons compared with other surgeons not yet in the training programme. The satisfaction with the training programme rating scales will be summarised to provide a mean score (SD) for each of the sections related to the trainer/mentor, the hospital/peer support, overall training and training objectives, as well as an overall summary score. To assess the unadjusted and adjusted strength of association between participants' satisfaction with the programme, and trainer, trainee or hospital characteristics, linear, logistic or generalised linear regression models will be fitted, depending on the distribution of the outcome variable. Qualitative analysis All interviews will be transcribed verbatim. Transcripts will be read and re-read by two independent researchers. The semistructured interview questions will be used as a priori codes and additional codes will be developed using deductive content analysis. Using a framework approach, themes will then be extracted and compared between the two readers. Discrepancies will be discussed with other members of the research team until resolved. Data will be presented with representative direct quotes. Participant and public involvement Before planning the IMAGINE trial, we conducted interviews with 10 women who had had a hysterectomy in the past, and then surveyed over 2600 women, which helped inform the rationale for the study. Each year, we hold a patient forum to inform women about the ongoing research conducted by the Queensland Centre for Gynaecological Cancer.
We also provide updated written summaries on our public-facing website, and send biannual newsletters to patients and interested members of the public. Monitoring and quality assurance Process quality A study manager has overall responsibility for monitoring compliance with the study protocol and for initiating any remedy requests. All SMs and STs are provided with a copy of the protocol and a training manual which describes the practical steps necessary to implement the programme and the standard operating procedures. Data quality On a monthly basis, the study manager will assess: (i) completeness and timeliness of study data collection—completeness of information in the online SurgicalPerformance (SP) database is determined by comparing it with extracts from the operating room management information system; and (ii) study data validity—a comparison of SP versus the medical record is made for a random sample of data (20% for each training centre: 5% during baseline, 10% during intervention and 5% during follow-up), with a target of <5% inconsistency between sources. Where problems of completeness or inconsistency are identified, the study manager will ask the participants to remedy them. Ethics and dissemination The study is registered as NCT03617354.
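To illustrate the within-site comparison and inverse-variance pooling described in the statistical analysis plan, the following minimal sketch uses hypothetical pre- and post-training hysterectomy counts and the scipy library; it is a schematic illustration only, not the trial's analysis code.

```python
# Minimal sketch: chi-squared test of the TAH proportion within each site, then
# inverse-variance pooling of the site-level reductions. Counts are hypothetical.
from math import sqrt
from scipy.stats import chi2_contingency

# Hypothetical (TAH, non-TAH) hysterectomy counts per site, pre and post training.
sites = {
    "site_1": {"pre": (40, 60), "post": (22, 78)},
    "site_2": {"pre": (35, 45), "post": (20, 70)},
}

diffs, weights = [], []
for name, counts in sites.items():
    (a, b), (c, d) = counts["pre"], counts["post"]
    _, p_value, _, _ = chi2_contingency([[a, b], [c, d]])
    p_pre, p_post = a / (a + b), c / (c + d)
    diff = p_pre - p_post                      # reduction in the TAH proportion
    var = p_pre * (1 - p_pre) / (a + b) + p_post * (1 - p_post) / (c + d)
    diffs.append(diff)
    weights.append(1 / var)                    # inverse-variance weight
    print(f"{name}: reduction={diff:.3f}, chi-squared p={p_value:.3f}")

pooled = sum(w * d for w, d in zip(weights, diffs)) / sum(weights)
pooled_se = sqrt(1 / sum(weights))
print(f"Pooled reduction in TAH proportion: {pooled:.3f} (SE {pooled_se:.3f})")
```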
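The monthly data-validity check lends itself to a similar sketch: draw the phase-stratified random sample described above and compute the proportion of sampled records in which the SurgicalPerformance entry disagrees with the medical record. Again this is illustrative only; the record fields are hypothetical.

```python
# Schematic of the monthly source-data verification for one training centre.
import random

PHASE_FRACTIONS = {"baseline": 0.05, "intervention": 0.10, "follow_up": 0.05}

def sample_for_audit(records):
    """Phase-stratified random sample of one centre's records (about 20% overall)."""
    sample = []
    for phase, fraction in PHASE_FRACTIONS.items():
        phase_records = [r for r in records if r["phase"] == phase]
        sample.extend(random.sample(phase_records, round(len(phase_records) * fraction)))
    return sample

def inconsistency_rate(sample):
    """Proportion of sampled records whose SP entry disagrees with the medical record."""
    if not sample:
        return 0.0
    return sum(r["sp_value"] != r["chart_value"] for r in sample) / len(sample)

# Hypothetical records for one centre; field names are illustrative only.
records = [
    {"phase": random.choice(list(PHASE_FRACTIONS)), "sp_value": "TLH", "chart_value": "TLH"}
    for _ in range(200)
]
audit = sample_for_audit(records)
print(f"Sampled {len(audit)} records; inconsistency rate {inconsistency_rate(audit):.1%} (target < 5%)")
```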
The IMAGINE trial is innovative; it will formally test a structured training model intended to develop advanced laparoscopic competency in gynaecological surgeons, so that they may complete a hysterectomy entirely laparoscopically. Hysterectomy is the most common major gynaecological surgical procedure (almost 30 000 cases/year in Australia). While high-level evidence is available to suggest that compared with less invasive approaches, TAH is associated with inferior health outcomes, a significant proportion of women still receive TAH. It has been proposed that increasing the adoption of TLH would reduce the rate of TAH and subsequently improve surgical outcomes as well as provide significant cost savings for funders of healthcare. In previous work, we interviewed and surveyed Australian gynaecologists. It is apparent that Australian gynaecologists prefer to offer hysterectomy through a vaginal approach; however, if a vaginal approach is infeasible, many resort to an open, abdominal surgical approach. Only a minority of Australian gynaecological surgeons considered themselves sufficiently trained to offer patients a TLH, and this lack of training is the main impediment to them offering TLH as an alternative to TAH. We also surveyed women who had a hysterectomy, as active requests by patients to receive TLH may contribute to a change in the hysterectomy approach used. However, we found that women mainly knew about TAH, tended to follow the recommendation of their doctor and rarely sought a second opinion. Therefore, patient pressure was unlikely to facilitate a quicker move to TLH, and providing gynaecological surgeons with training to allow them to confidently offer TLH to their patients was the most promising strategy. Information is abundant on the assessment of surgical skills in specialist trainees and specialists, quantifying the surgical skills required for laparoscopic hysterectomy and documenting improved surgical skills from baseline through training. The UK programme on laparoscopic colorectal cancer surgery has reported extensively on the key elements and outcomes of the training programme. However, comparatively little information is available in the literature about how surgical training processes for practising consultant gynaecological surgeons may be established at an organisational level. The research study described here aims to close that evidence gap.
If successful, this training model will equip surgeons with the skills to offer a minimally invasive approach to hysterectomy which will translate into better care for women, while reducing health system costs. The findings may be useful in informing a scaled-up model for laparoscopic gynaecological surgical training.
An autopsy case report of aortic dissection complicated with histiolymphocytic pericarditis and aortic inflammation after mRNA COVID-19 vaccination
Introduction
In December 2020, the Japanese Ministry of Health, Labour and Welfare authorized the emergency use of two mRNA vaccines, BNT162b2 (produced by Pfizer-BioNTech) and mRNA-1273 (produced by Moderna), to control the spread of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) and its associated disease, coronavirus disease 2019 (COVID-19). These vaccines encode the SARS-CoV-2 spike protein and have excellent efficacy and safety profiles. However, they can cause mild adverse reactions such as injection site pain, fatigue, and headache, as well as rare but more severe side effects such as thromboembolism, myocarditis, and pericarditis. Because of the low incidence of severe side effects and the difficulty in obtaining biopsy samples, histologically confirmed pericarditis has not been reported as a vaccine-related outcome.
Case presentation
Case history A Japanese male in his 90s consulted a doctor because he experienced several days of general fatigue and dyspnea. His legs were edematous, and chest X-ray showed right pleural effusion. Elevated N-terminal pro-brain natriuretic peptide (NT-proBNP; 3,706 pg/mL) and C-reactive protein (47.9 mg/L) were detected. The electrocardiogram results showed no abnormal change. He was diagnosed with heart failure but refused hospital admission. The patient was prescribed a 3-day course of diuretic medication, which relieved his symptoms and decreased the NT-proBNP level. However, he was found lifeless in his kitchen on the morning of the fourth day after consulting the doctor. He had received a third dose of BNT162b2 approximately 2 weeks before death. No previous illness was reported. He did not have a history of smoking or habitual alcohol consumption. A police investigation at the man’s home revealed no suspicious activity.
Autopsy findings A medical examiner found no external abnormalities, including in the left deltoid injection site; therefore, an autopsy was performed 35 h postmortem. The deceased was 156 cm in height and weighed 52 kg. The pericardial sac was filled with dark red clots ( A). The ascending aorta had a 2.5 cm intimal tear at 4 cm above the aortic annulus ( B). The aortic media was dissected, and the adventitia was perforated within the pericardial cavity. The heart weighed 458 g and had a white villous surface ( C). Coronary arteries showed mild atherosclerosis. Disrupted coronary artery plaques, coronary aneurysms, and pulmonary emboli were not detected.
Microscopic examination Microscopic examination revealed fibrously thick epicardium with inflammatory cell infiltration predominantly composed of macrophages and lymphocytes ( A and 2B). Minimal necrosis of the outermost layer of the myocardium in the left lateral wall was also detected. There were no thrombi or multinuclear giant cells. Immunohistochemistry analysis of CD3, CD4, CD8, CD68, and CD79a confirmed macrophages, cytotoxic T lymphocytes, and B lymphocytes in the lesion ( C). A PCR assay for SARS-CoV-2 detection was not conducted because all tissue samples were fixed in formalin solution. The pericardial membrane was thick with fibrin deposition and hypertrophic fibroblasts. Macrophages and lymphocytes were also detected in the membrane ( ). The aortic root was dissected at the collagenous lesion; it showed inflammatory cell infiltration in the tunica media ( A and 4B). Medial elastic fibers were shown to be disrupted in Elastica van Gieson stain ( C). Immunohistochemical assay revealed macrophage and T- and B-cell infiltration in the aortic wall ( D).
Laboratory testing Laboratory examinations of the femoral blood were negative for antibodies to parvovirus-B19, cytomegalovirus, coxsackie virus-A4, ECHO virus-11 and −14, adenovirus, and influenza A (H1N1 and H3N2) and B (B-1 and B-2) viruses. A neutralization test for ECHO virus-9 was positive at a titer of 32. The serum was positive for anti-SARS-CoV-2 spike protein IgG antibody (583 AU/mL). Headspace gas chromatography revealed no ethanol in the venous blood, urine, or cerebrospinal fluid.
Discussion
Acute pericarditis is inflammation of the visceral and parietal pericardium. Because the pericardial sac contains the heart and the roots of the great vessels, pericardial inflammation can extend to the aortic wall. Acute pericarditis has a variety of causes, such as viral and bacterial infection, systemic lupus erythematosus, rheumatoid arthritis, neoplastic disease, radiation, and trauma. In the clinical setting, pericarditis is diagnosed using several clinical manifestations and criteria, such as pericardial friction rub, laboratory testing, electrocardiogram, echocardiogram, and other imaging modalities. Histopathological findings are often not used in the diagnosis owing to the difficulty of the sampling procedure—pericardial samples can only be obtained surgically. Although pericardioscopy-guided percutaneous biopsy of the pericardium has been reported without major complications, this procedure is technically challenging, and an experienced operator is necessary. To the best of our knowledge, this is the first case report of histologically proven pericarditis after COVID-19 vaccination. SARS-CoV-2 vaccines have been associated with rare, but sometimes fatal, cardiovascular side effects such as thromboembolism, myocarditis/pericarditis, arrhythmia, and cardiomyopathy. Fazlollahi et al. summarized the features of seven pericarditis cases after vaccination from case reports and case series. The median case age was 37 years (range: 21 to 80), and 71.4% were men. The median time from vaccination to onset of the first symptom was 4 days (range: 1 to 11). None of the described cases had died. Diaz et al. reported 37 pericarditis cases from over 2 million individuals who received a COVID-19 vaccine (35 received an mRNA vaccine; 2 received a vector-based vaccine). In their report, the median age was 59 years (interquartile range: 46–69), 73% were men, and the latent period was 20 days (interquartile range: 6–41). Although no mortality was reported, 13 patients (35.1%) were admitted to the hospital, and the median length of stay was 1 day. Two-thirds of the cases were treated in the outpatient setting, mainly with colchicine and non-steroidal anti-inflammatory drugs. These two series found a generally favorable prognosis of pericarditis after COVID-19 vaccination. Several investigators have described the histopathological findings of myocarditis after COVID-19 vaccination, which showed abundant macrophage and T-cell infiltration. These features are similar to those of SARS-CoV-2 infection-associated myocarditis. The involvement of eosinophils and B lymphocytes has also been reported. The abovementioned histopathological findings of post-vaccination myocarditis are compatible with those of our pericarditis case. Although a direct causal relationship between COVID-19 mRNA vaccination and pericarditis cannot be definitively established in the present case, no other causes were identified from the autopsy findings and laboratory results. A serological examination for virus detection has limited value because it cannot distinguish past infections from the most recent infection. Therefore, we cannot completely rule out a viral etiology. Although the pathophysiology remains unknown, several hypotheses for the occurrence of post-vaccination myocarditis have been suggested.
One hypothesis is that a highly induced antibody response in young people can elicit a response similar to that of multisystem inflammatory syndrome in children with SARS-CoV-2 infection. However, a case of myocarditis without anti-SARS-CoV-2 spike protein antibodies after COVID-19 vaccination has been reported. The present patient developed pericarditis after a third dose of COVID-19 vaccine, and mildly elevated IgG antibodies were detected in the postmortem serum sample. Cross-reactive antibodies and a non-specific innate inflammatory response have also been hypothesized. Furthermore, RNA (as opposed to the translated protein) is a potent immunogen and produces a bystander or adjuvant effect, although pericarditis after vaccination with a viral vector-based vaccine has also been reported. We presumed that death in the present case was caused by pericarditis-induced fragility of the aortic wall followed by cardiac tamponade. Diuretic medications improved the patient’s heart failure due to pericarditis; however, inflammation extending to the adventitia was a possible cause of aortic dissection. In this case, the deceased became aware of symptoms of heart failure approximately 1 week after receiving the BNT162b2 vaccine. The time between vaccination and death was approximately 2 weeks. The fibrously thick pericardial membrane was consistent with this time course.
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Optimizing the production and efficacy of antimicrobial bioactive compounds from Streptomyces kanamyceticus
Introduction
The global rise of multidrug-resistant (MDR) pathogens represents a critical public health challenge, with serious implications for healthcare systems worldwide. MDR infections have become a major cause of morbidity and mortality, with nearly 700,000 deaths annually, a figure projected to rise to 10 million by 2050 if current trends persist. This crisis is exacerbated by the indiscriminate use of antibiotics, particularly broad-spectrum antibiotics like cephalosporins, fluoroquinolones, and carbapenems, which have accelerated the emergence of resistant strains. About 50% of antibiotics prescribed are either inappropriate or unnecessary, contributing to the rapid development of resistance. The limitations of currently available antibiotics, including penicillin derivatives, tetracyclines, and macrolides, further compound the problem, as many first-line treatments are losing efficacy against resistant organisms such as Escherichia coli, Klebsiella pneumoniae, and Staphylococcus aureus. This pressing issue highlights the urgent need for novel antimicrobial agents capable of effectively combating MDR pathogens. Despite decades of research, the discovery of new antibiotics has been remarkably slow, with only a handful—such as daptomycin and ceftaroline—introduced in the last 40 years. This stagnation is largely due to the high costs and extended timelines associated with antibiotic development, paired with the limited financial returns from drugs that are typically used only for short durations. The pharmaceutical industry has largely shifted its focus away from antibiotic development due to the lower profitability of antimicrobial agents. This has resulted in a limited pipeline of novel drugs. Furthermore, most recent efforts have yielded only minor improvements on existing therapies, rather than producing innovative breakthrough solutions. The scarcity of new antibiotics has led to an increased reliance on last-resort drugs like colistin and vancomycin. Alarmingly, resistance to these vital drugs is also emerging, underscoring the need to explore alternative sources of antimicrobial compounds. Natural products, particularly phytochemicals such as flavonoids, alkaloids, and terpenes, have long been recognized as rich sources of bioactive compounds with therapeutic potential. Notably, over 70% of all antibiotics in clinical use, including penicillin and vancomycin, were derived from natural sources. In recent years, research efforts have turned toward underexplored reservoirs such as plant-based compounds and microbial metabolites. Numerous studies have shown the efficacy of phytochemicals like curcumin and berberine in inhibiting bacterial growth, often with lower risks of resistance development compared to synthetic antibiotics. However, despite the promising potential of these natural compounds, large-scale, in-depth research remains limited. Most studies are still in their preliminary stages, and there is a pressing need for more comprehensive investigations on a global scale. One particularly promising source of bioactive compounds is Streptomyces, a genus of soil-dwelling bacteria renowned for its ability to produce a wide range of antibiotics, including tetracycline, chloramphenicol, and erythromycin, along with other secondary metabolites like antitumor agents and immunosuppressants.
Streptomyces has played a pivotal role in antibiotic discovery, most notably as the source of streptomycin, the first effective treatment for tuberculosis. The genus is recognized for producing diverse bioactive compounds with antimicrobial, antifungal, and antitumor properties. Continued exploration of Streptomyces species remains essential for the identification of new bioactive compounds (Mazumdar et al., 2023). In this study, we focus on optimizing the production and efficacy of bioactive compounds from S. kanamyceticus in combating MDR pathogens. Our objective is to enhance the yield and potency of these compounds using Central Composite Design (CCD) and Partial Least Squares Regression (PLSR) analysis. By doing so, we aim to uncover the potential of S. kanamyceticus as a source of novel antimicrobial agents, contributing to the global fight against MDR infections.
Methods
2.1 Chemicals
All chemicals, media components, and standard antibiotics utilized in this study were sourced from Hi-Media Pvt. Ltd. and Sigma-Aldrich Corporation (India). The solvents used throughout the experiments were of analytical or HPLC grade, obtained from SD Fine Chem Limited (India).
2.2 Isolation and characterization of Streptomyces species
A total of 10 soil samples were systematically collected from the local region in February 2023, specifically at latitude 30.7333° N and longitude 76.7794° E. Sampling sites were randomly selected at various distances within the area. To ensure diversity in Streptomyces species, soil was collected from five distinct points within a 400 m² zone for each habitat. At each point, the top 6 cm of soil was removed using a sterile spatula. Subsequently, 100 to 120 grams of soil from the underlying layer were collected, placed in stomacher sachets, mixed, and homogenized to produce a heterogeneous sample. All soil samples were collected in sterile containers and stored immediately at 4°C until further processing. To initiate the isolation process, soil samples (1 g) were suspended in sterile saline (9 ml), and serial dilutions were spread onto starch-nitrate agar medium supplemented with 50 μg/ml of cycloheximide and 30 μl/ml of nalidixic acid. The agar plates were incubated at 37°C for 5 days. Potentially valuable isolates were identified by selecting grey colonies that secreted pigments. These selected colonies were then streaked onto starch casein nitrate (SCN) agar containing 25 µg/ml cycloheximide and 50 µg/ml rifampicin, with further incubation at 37°C for another 5 days. After incubation, the isolates were characterized based on the International Streptomyces Project (ISP) criteria, which included an evaluation of mycelium shape, colour, substrate mycelium, melanin production, and soluble pigment production. Stock cultures of the 20 selected isolates were preserved as spore and mycelial suspensions in 20% glycerol at -20°C for future reference. To confirm the species, morphological assessments were conducted on the 20 isolates. Species confirmation involved performing a PCR assay targeting the 16S rRNA gene using the forward primer (5'-AGAGTTTGATCMTGGCTCAG-3') and reverse primer (5'-TACGGYTACCTTGTTACGACTT-3'). The amplified products were visualized through electrophoresis on a 1.8% agarose gel stained with 0.5 μg/ml ethidium bromide. DNA sequencing of the PCR products was carried out by a commercial service provider (Genentech). The resulting sequences were analyzed using the Basic Local Alignment Search Tool (BLAST) on the National Center for Biotechnology Information (NCBI) platform. These sequences were compared with GenBank database sequences (www.ncbi.nlm.nih.gov), and a phylogenetic tree was constructed to determine evolutionary relationships.
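As a worked example of the plating arithmetic implied above (1 g of soil in 9 ml of saline followed by serial ten-fold dilutions), the sketch below estimates colony-forming units per gram of soil from a plate count; the dilution level, colony count, and plated volume shown are hypothetical, not study data.

    # Hypothetical worked example: estimating CFU per gram from a serial dilution series.
    def cfu_per_gram(colonies_counted: int, dilution_factor: float, volume_plated_ml: float) -> float:
        """CFU/g = colonies / (dilution factor x volume plated), for 1 g of soil in the initial suspension."""
        return colonies_counted / (dilution_factor * volume_plated_ml)

    # 1 g of soil in 9 ml of saline gives an initial 1:10 suspension; three further 1:10 steps give 10^-4.
    count = 42           # colonies counted on the 10^-4 plate (hypothetical)
    dilution = 1e-4      # overall dilution of the plated suspension
    plated_volume = 0.1  # ml spread on the plate (hypothetical)
    print(f"Estimated load: {cfu_per_gram(count, dilution, plated_volume):.2e} CFU/g")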
2.3 Primary screening for antimicrobial activity
To identify the optimal medium for antibiotic production, the antimicrobial activity of pure isolates was evaluated using the double-layer method on various ISP agar media. First, different ISP agar media, including ISP2, ISP3, ISP4, and ISP5, were prepared according to standard protocols. These media were selected to assess the performance of Streptomyces under different nutrient conditions. Streptomyces isolates were then inoculated onto the prepared ISP agar media and incubated for a specified period, typically 7 days, to promote optimal growth and antibiotic production. Subsequently, test organisms, including Escherichia coli ATCC 25922 (Gram-negative bacterium), Staphylococcus aureus ATCC 25923 (Gram-positive bacterium), Bacillus subtilis ATCC 19659 (non-pathogenic Gram-positive bacterium), and Candida albicans ATCC 60193 (pathogenic yeast), were used to evaluate antimicrobial activity. The plates were then incubated at 37°C for 24 to 48 hours for bacterial strains and at 25°C for 48 to 72 hours for Candida albicans. Following incubation, antimicrobial activity was assessed by measuring the zones of inhibition around the actinomycete colonies. The measurement of these zones (mm) provided an indication of the antimicrobial effectiveness of the isolates. Larger zones of inhibition signified greater antimicrobial activity. The results were analysed to determine which ISP medium supported the highest levels of antimicrobial activity. This comparison facilitated the selection of the most effective medium for optimizing antibiotic production.
2.4 Extraction method for bioactive compounds from Streptomyces spp.
To extract bioactive compounds from Streptomyces spp., the culture broth was filtered through sterile cheesecloth to remove mycelial debris. The filtrate, typically 100 mL, was transferred to a separatory funnel. To this, an equal volume of diethyl ether (Et2O) was added. The mixture was shaken vigorously for 10 minutes to ensure thorough mixing. Afterwards, the mixture was allowed to settle, and the organic layer, which contained the extracted compounds, was carefully separated and collected. Concurrently, the mycelial mass was homogenized with diethyl ether in a mortar and pestle, using approximately 100 mL of solvent for the homogenization. The resulting homogenate was filtered through a fine mesh to obtain the liquid extract. This liquid was then combined with the organic layer from the culture broth extraction. The combined diethyl ether extracts were concentrated using a rotary evaporator at 40°C to remove the solvent. The concentrated extract was transferred to a clean glass vial and stored at -20°C until further analysis.
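The double-layer screening described in Section 2.3 produces, for each isolate on each medium, a set of inhibition-zone diameters against the four test organisms. A minimal illustrative sketch of how such readings can be tabulated and the best-performing medium selected is given below; the isolate names and numbers are hypothetical placeholders, not study data.

    # Hypothetical zone-of-inhibition readings (mm): {medium: {isolate: {organism: zone}}}
    readings = {
        "ISP2": {"SK-A": {"E. coli": 18, "S. aureus": 22, "B. subtilis": 20, "C. albicans": 21}},
        "ISP4": {"SK-A": {"E. coli": 12, "S. aureus": 15, "B. subtilis": 14, "C. albicans": 13}},
    }

    def mean_zone(per_organism: dict) -> float:
        return sum(per_organism.values()) / len(per_organism)

    def best_medium(data: dict) -> str:
        """Medium whose isolates show the largest average inhibition zone across all organisms."""
        scores = {
            medium: sum(mean_zone(zones) for zones in isolates.values()) / len(isolates)
            for medium, isolates in data.items()
        }
        return max(scores, key=scores.get)

    print(best_medium(readings))  # -> "ISP2" for the placeholder values above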
2.5 Determination of minimum inhibitory concentration
To determine the MIC of the bioactive compounds extracted from Streptomyces spp., the Kirby-Bauer disc diffusion method was utilized. Mueller-Hinton agar plates were prepared for bacterial assays, while Sabouraud Dextrose agar plates were used for fungal assays. The test microorganisms—Escherichia coli, Staphylococcus aureus, and Candida albicans—were cultured to an appropriate turbidity, specifically a 0.5 McFarland standard, which corresponds to approximately 1.5 × 10⁸ CFU/mL. This turbidity standard ensures consistent inoculum density across different assays. The cultures were then evenly spread onto the surface of the agar plates using a sterile spreader to achieve a uniform lawn of growth. Sterile filter paper discs were impregnated with various concentrations of the concentrated extract. Typically, concentrations ranged from 10 µg, 20 µg, and 40 µg up to 100 µg per disc, though specific concentrations may vary depending on the extract's potency and experimental design. The discs were placed on the inoculated agar plates with sufficient spacing to prevent overlapping inhibition zones. The plates were incubated at 37°C for 24 hours for bacterial strains and at 25°C for 48 to 72 hours for Candida albicans. After incubation, the zones of inhibition around the discs were measured in mm. The MIC was determined by identifying the lowest concentration of the extract which resulted in a clear zone of inhibition around the disc, indicating effective suppression of microbial growth. This approach provided a quantitative measure of the antimicrobial activity of the extracts against the selected pathogens.
2.6 FTIR analysis of diethyl ether extracts from Streptomyces SK-2023-2 and SK-2023-4
The diethyl ether extracts of Streptomyces strains SK-2023-2 and SK-2023-4 were analyzed using Fourier Transform Infrared Spectroscopy (FTIR) to identify functional groups and characterize the chemical composition of the bioactive compounds. For each sample, 1 mg of extract was placed on a clean KBr pellet and thoroughly mixed with about 100 mg of dry KBr powder to form a homogeneous mixture. This mixture was then compressed into a transparent pellet using a hydraulic press at a pressure of 10,000 psi for approximately 5 minutes, ensuring that the pellet was uniform and free from air bubbles. FTIR spectra were recorded using an FTIR spectrometer (ReactIR702L) over the range of 4000 cm⁻¹ to 400 cm⁻¹. Each spectrum was acquired at a resolution of 4 cm⁻¹, with 32 scans averaged to enhance the signal-to-noise ratio. The obtained spectra were analyzed to identify characteristic absorption bands corresponding to the functional groups present in the extracts. Peaks were assigned based on comparisons with standard reference spectra and literature values. The spectral data were interpreted to determine the functional groups and possible chemical structures of the bioactive compounds present in the extracts from SK-2023-2 and SK-2023-4.
2.7 Statistical optimization of antimicrobial properties
To identify the optimal concentrations and interactions of key media components—glucose, glycine max, and CaCO3—a systematic investigation was performed using a Central Composite Design (CCD). The experimental setup is detailed in , where each factor was evaluated at five different levels: -α, -1, 0, +1, and +α. Design Expert Software version 13 was utilized to generate a series of 20 experiments, each conducted in triplicate. The average zone of inhibition (mm) was measured as the dependent variable (response). All experiments were prepared and incubated at 28°C and 100 rpm for 5 days.
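For a three-factor central composite design of the kind described above (factorial, axial and centre points coded as -α, -1, 0, +1, +α), the coded design matrix can be generated directly. The sketch below is illustrative only; it assumes the conventional rotatable value α ≈ 1.682 for three factors and six centre points to give 20 runs, rather than the exact settings exported from Design Expert Software.

    import itertools
    import numpy as np

    def ccd_coded(n_factors: int = 3, n_center: int = 6, alpha: float = 1.682) -> np.ndarray:
        """Coded central composite design: 2^k factorial + 2k axial + centre points."""
        factorial = np.array(list(itertools.product([-1.0, 1.0], repeat=n_factors)))
        axial = np.zeros((2 * n_factors, n_factors))
        for i in range(n_factors):
            axial[2 * i, i] = -alpha
            axial[2 * i + 1, i] = alpha
        center = np.zeros((n_center, n_factors))
        return np.vstack([factorial, axial, center])

    design = ccd_coded()   # 8 factorial + 6 axial + 6 centre runs = 20 runs for three factors
    print(design.shape)    # (20, 3)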
2.7.1 Statistical analysis and modelling
The data obtained from the CCD experiments were subjected to rigorous analysis of variance (ANOVA). A second-order polynomial equation was fitted through multiple regression analysis to develop an empirical model that quantifies the relationship between the measured response and the independent variables. The empirical model is expressed by the following equation:

Y = B0 + B1X1 + B2X2 + B3X3 + B4X1X2 + B5X1X3 + B6X2X3 + B7X1² + B8X2² + B9X3²

where Y represents the predicted response, B0 through B9 are the regression coefficients for the respective variables and interaction terms, and the independent variables X1, X2, and X3 correspond to glucose, glycine max, and CaCO3, respectively. The response was further analysed through three-dimensional plots to visualize the effects and interactions of the variables.
2.7.2 Multivariate analysis
A combination of multiple linear regression (MLR) and the white-box approach to data modelling was employed, utilizing Partial Least Squares Regression (PLSR). This methodology, as discussed previously in the literature, aimed to thoroughly understand the correlation between the independent variables (glucose, glycine max, and CaCO3) and the zone of inhibition (dependent variable). PLSR is particularly useful in the presence of multicollinearity among predictor variables, allowing the development of both explanatory and predictive models. Additionally, to ensure the robustness of the developed model, a model assessment technique known as leave-one-out cross-validation was applied. This iterative procedure builds a regression model with each sample excluded in turn as a test case and then assesses the prediction outcomes. Through these statistical approaches, the complex relationships between the experimental variables and the antifungal properties were extensively explored.
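The model-fitting and validation steps described in Sections 2.7.1 and 2.7.2 can be reproduced with standard tools. The sketch below is a minimal, illustrative implementation using scikit-learn; the design matrix X (coded glucose, glycine max and CaCO3 levels) and response y (zones of inhibition) are randomly generated placeholders, not the experimental data.

    import numpy as np
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    # Placeholder data: 20 CCD runs (coded factor levels) and their zones of inhibition (mm).
    rng = np.random.default_rng(0)
    X = rng.uniform(-1.682, 1.682, size=(20, 3))
    y = 26 + 3 * X[:, 0] + 2.5 * X[:, 1] - 5 * X[:, 0] ** 2 + rng.normal(0, 1, 20)

    # Second-order (quadratic) response-surface model: main effects, interactions and squared terms.
    quad = PolynomialFeatures(degree=2, include_bias=False)
    Xq = quad.fit_transform(X)
    rsm = LinearRegression().fit(Xq, y)
    print("R^2 of the quadratic model:", rsm.score(Xq, y))

    # PLSR with leave-one-out cross-validation, mirroring the validation strategy described above.
    pls = PLSRegression(n_components=2)
    y_loo = cross_val_predict(pls, X, y, cv=LeaveOneOut()).ravel()
    press = np.sum((y - y_loo) ** 2)
    q2 = 1 - press / np.sum((y - y.mean()) ** 2)
    print("Leave-one-out Q^2:", q2)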
Results
3.1 Identification and characterization of Streptomyces spp.
The soil analysis from the site indicated it to be alkaline with a pH of 7.9 and low salinity (EC = 0.15 dS/m). Based on the texture diagram, the soil was classified as sandy loam, characterized by a low clay content (20%), adequate total nitrogen (0.17%), and a moderate level of organic matter (2.98%). The analysis also revealed the presence of exchangeable cations, including potassium (K), magnesium (Mg), aluminium (Al), calcium (Ca), and silicon (Si). Among the mineral elements, oxygen (O), iron (Fe), and silicon (Si) were the most abundant, followed by aluminium, calcium, potassium, and magnesium. From the soil samples collected across the five locations, 25 morphologically distinct presumptive Streptomyces strains were successfully isolated. Out of the 25 isolated strains, 7 were identified as S. kanamyceticus based on 16S rRNA sequencing, and their phylogenetic relationships were confirmed through a constructed phylogenetic tree. These 7 isolates were selected for further study and were designated as SK-2023-1 through SK-2023-7.
3.2 Screening of active S. kanamyceticus isolates
The primary screening of the S. kanamyceticus isolates against Escherichia coli ATCC 25922, Staphylococcus aureus ATCC 25923, Bacillus subtilis ATCC 19659, and Candida albicans ATCC 60193 revealed varied levels of antimicrobial activity. Isolate SK-2023-1 demonstrated inhibition zones of 20 mm, 22 mm, 25 mm, and 25 mm, respectively, against these microorganisms. SK-2023-2 exhibited the highest inhibition for all tested organisms, with 24 mm against E. coli, 30 mm against S. aureus, 29 mm against B. subtilis, and 30 mm against C. albicans. Conversely, SK-2023-3 showed the lowest activity, with inhibition zones of 16 mm, 15 mm, 14 mm, and 18 mm, respectively. SK-2023-4 produced strong inhibition, particularly against C. albicans (31 mm) and S. aureus (28 mm). The antimicrobial activity of SK-2023-5 was moderate, ranging from 15 mm to 20 mm across all pathogens. SK-2023-6 and SK-2023-7 displayed similar levels of activity, with SK-2023-6 showing higher inhibition against E. coli (29 mm), while both exhibited strong inhibition against C. albicans (27 mm and 31 mm, respectively). These results indicate significant variability in the antimicrobial efficacy of the different S. kanamyceticus isolates.
3.3 Extraction and minimum inhibitory concentration
The extraction of bioactive compounds from Streptomyces spp. was successfully achieved, yielding approximately 150-200 mL of concentrated extracts suitable for further analysis. The concentrated extracts exhibited distinct colours and viscosities, indicating the presence of various metabolites. Subsequent antibacterial and antifungal activity assays revealed significant antimicrobial potential of the bioactive compounds. The minimum inhibitory concentrations (MIC) and the corresponding zones of inhibition were recorded. Escherichia coli ATCC 25922 exhibited an MIC ranging from 20 µg/ml to 70 µg/ml, with zone of inhibition measurements varying from non-detectable to substantial inhibition (29 mm). Similarly, Staphylococcus aureus ATCC 25923 demonstrated MIC values between 25 µg/ml and 70 µg/ml, with zones of inhibition reaching up to 25 mm. Bacillus subtilis ATCC 19659 showed an MIC range of 20 µg/ml to 60 µg/ml, while its zone of inhibition varied significantly, reaching a maximum of 23 mm. For the fungal pathogen Candida albicans ATCC 60193, the MIC ranged from 20 µg/ml to 65 µg/ml, with inhibition zones also showing substantial variability, peaking at 31 mm. These findings indicated that the bioactive compounds possessed significant antimicrobial activity against both bacterial and fungal pathogens.
3.4 FTIR analysis
The FTIR analysis of the diethyl ether extracts from Streptomyces SK-2023-2 and SK-2023-4 revealed distinct spectral profiles, highlighting the presence of various functional groups. The FTIR spectrum of SK-2023-2 displayed characteristic absorption bands at specific wavenumbers. A prominent peak at 3350 cm⁻¹ corresponded to O-H stretching vibrations, indicating the presence of hydroxyl groups. Additionally, peaks at 2920 cm⁻¹ and 2850 cm⁻¹ indicated C-H stretching of aliphatic compounds, while the absorption band at 1740 cm⁻¹ suggested the presence of carbonyl (C=O) groups. Furthermore, peaks at 1600 cm⁻¹ and 1450 cm⁻¹ were attributed to C=C stretching vibrations, suggesting possible aromatic structures. The FTIR spectrum of SK-2023-4 exhibited similar absorption features. The presence of O-H stretching was confirmed by a broad peak around 3400 cm⁻¹, while C-H stretching vibrations appeared at 2925 cm⁻¹ and 2855 cm⁻¹. The carbonyl group was indicated by a peak at 1715 cm⁻¹. Moreover, peaks at 1610 cm⁻¹ and 1490 cm⁻¹ were associated with aromatic compounds. The FTIR analysis confirmed the presence of functional groups indicative of diverse bioactive compounds in both extracts, suggesting their potential for further investigation into antimicrobial activities.
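The band assignments reported above follow standard correlation-table ranges. A small illustrative lookup of this kind is sketched below; the ranges are approximate literature values given as assumptions, and the peak list is the one reported for SK-2023-2.

    # Approximate correlation-table ranges (cm^-1) for the assignments used above.
    ASSIGNMENTS = [
        ((3200, 3550), "O-H stretching (hydroxyl)"),
        ((2840, 2980), "C-H stretching (aliphatic)"),
        ((1700, 1750), "C=O stretching (carbonyl)"),
        ((1430, 1620), "C=C stretching (aromatic/alkene)"),
    ]

    def assign(peak_cm1: float) -> str:
        for (low, high), label in ASSIGNMENTS:
            if low <= peak_cm1 <= high:
                return label
        return "unassigned"

    sk2023_2_peaks = [3350, 2920, 2850, 1740, 1600, 1450]
    for peak in sk2023_2_peaks:
        print(peak, "->", assign(peak))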
3.5 Statistical optimization and modelling
The optimization of antibiotic production was performed using a central composite design (CCD), which included three independent variables: X1 = glucose, X2 = glycine max, and X3 = CaCO3. The design examined the influence of these factors on the antibiotic yield. The highest antibiotic production was achieved in a medium containing (g/L): glucose 10, glycine max 10, and CaCO3 1, resulting in a zone of inhibition of 30 mm. The statistical model, analysed through ANOVA, confirmed the significance of the optimization process with an F-value of 3.18 and a p-value of 0.0427. Three-dimensional response surface plots were generated to illustrate the interaction effects of glucose, glycine max, and CaCO3 concentrations on the zone of inhibition. The results showed that varying two factors at a time, while maintaining the third at the midpoint, significantly impacted antifungal activity against Candida albicans. These insights underscored the critical roles of glucose and glycine max concentrations in maximizing the antibiotic yield produced by Streptomyces SK-2023-2. The regression equation modelling the zone of inhibition (Y) based on the glucose (X1), glycine max (X2), and CaCO3 (X3) concentrations was as follows:

Y = 26.13 + 3.08X1 + 2.60X2 + 0.89X3 + 2.50X1X2 − 0.75X1X3 − 5.15X1² − 4.80X2² − 2.33X3²

The predictive power of this model highlighted the significant contributions of glucose and glycine max in enhancing antifungal activity, while CaCO3 exerted a moderate effect.
3.6 Multivariate analysis
Partial Least Squares Regression (PLSR) was used to delve deeper into the relationships between the independent variables (glucose, glycine max, and CaCO3) and the zone of inhibition. Variable Importance in Projection (VIP) scores revealed that both glucose and glycine max played critical roles in antimicrobial activity, with VIP scores exceeding 1, indicating high significance. In contrast, CaCO3 exhibited a comparatively lower influence on antibiotic production. The correlation circle plot illustrated a strong positive correlation between the glucose and glycine max concentrations and the zone of inhibition, while CaCO3 had a more moderate association. Additionally, the VIP plot visually confirmed that glycine max exerted the greatest impact, followed by glucose. These findings reinforce the importance of optimizing glucose and glycine max concentrations to maximize antibiotic production. Statistical validation using leave-one-out cross-validation confirmed the robustness of the regression model. The optimization experiments, as summarized in , showed that the maximum antibiotic production, corresponding to a zone of inhibition of 30 mm, was obtained at 10 g/L glucose, 10 g/L glycine max, and 1 g/L CaCO3. The regression analysis strongly indicated the critical importance of glucose and glycine max in enhancing antibiotic production, as depicted in . These results highlight the necessity of fine-tuning these variables to achieve the highest possible antimicrobial efficacy.
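Because the fitted model is quadratic, its stationary point (the coded factor combination at which the predicted zone of inhibition is maximal, provided the surface is concave there) can be located by setting the partial derivatives to zero. The short sketch below does this for the coefficients reported above; it works in coded factor units and is illustrative only.

    import numpy as np

    # Coefficients of the reported model, written as Y = b0 + b.x + x'Bx, in coded units.
    b0 = 26.13
    b = np.array([3.08, 2.60, 0.89])                 # linear terms for X1, X2, X3
    B = np.array([[-5.15, 1.25, -0.375],             # diagonal: squared terms
                  [1.25, -4.80, 0.0],                # off-diagonal: half of each interaction term
                  [-0.375, 0.0, -2.33]])             # (no X2X3 term was reported, hence the zeros)

    x_star = -0.5 * np.linalg.solve(B, b)            # solves the gradient condition b + 2Bx = 0
    y_star = b0 + b @ x_star + x_star @ B @ x_star
    is_maximum = np.all(np.linalg.eigvalsh(B) < 0)   # concave surface -> stationary point is a maximum

    print("Stationary point (coded X1, X2, X3):", np.round(x_star, 2))
    print("Predicted zone of inhibition (mm):", round(float(y_star), 2), "maximum:", bool(is_maximum))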
Discussion
In this study, we characterized the soil environment where Streptomyces strains were isolated, noting a pH of 7.9 and a sandy-loam texture conducive to microbial diversity.
Discussion
In this study, we characterized the soil environment where Streptomyces strains were isolated, noting a pH of 7.9 and a sandy-loam texture conducive to microbial diversity. Our analysis revealed that the soil's nutritional profile, including adequate nitrogen and organic matter, supported the isolation of 25 distinct presumptive Streptomyces strains. Using 16S rRNA sequencing, we identified seven strains as Streptomyces kanamyceticus. This rigorous identification process underscores the importance of molecular techniques in accurately classifying microbial species, reflecting best practices in microbial ecology. Our findings highlight the potential of these isolates for biotechnological applications, particularly in antibiotic production.

We conducted a primary screening of our S. kanamyceticus isolates against several pathogens, revealing significant variability in antimicrobial activity. Notably, isolate SK-2023-2 demonstrated the highest inhibition zones, showcasing its potential for producing bioactive compounds. This observation aligns with previous studies indicating that certain Streptomyces species can yield potent antibiotics. Our results suggest that specific isolates may be more effective for therapeutic purposes, emphasizing the need for further investigation into their bioactive metabolites and mechanisms of action, particularly against resistant strains of bacteria.

We successfully extracted bioactive compounds from the Streptomyces isolates, achieving significant antimicrobial activity as evidenced by MIC values ranging from 20 µg/ml to 70 µg/ml. Our findings, particularly the high activity against Staphylococcus aureus and Candida albicans, reinforce the notion that S. kanamyceticus can produce unique antimicrobial agents. This variability in efficacy supports existing literature documenting the diverse bioactive potential of Streptomyces extracts. The substantial inhibition observed across different pathogens indicates the possibility of developing new therapeutic agents from these isolates, particularly in light of the increasing incidence of antimicrobial resistance.

Our FTIR analysis provided insight into the functional groups present in the extracts from SK-2023-2 and SK-2023-4. The identification of O-H, C-H, and C=O groups suggests a diverse range of bioactive metabolites, including phenolic compounds and terpenoids, which are known for their antimicrobial properties. The similarity in spectral profiles between the two isolates indicates shared metabolic pathways for compound production. This finding enhances our understanding of the chemical diversity present in Streptomyces extracts and paves the way for further investigations into their antimicrobial mechanisms.

We employed a central composite design (CCD) to optimize the production of antimicrobial compounds, demonstrating the significant roles of glucose, glycine max, and CaCO₃. The optimization yielded the highest antibiotic production at specific nutrient concentrations, with an F-value of 3.18 and a p-value of 0.0427 indicating the reliability of our findings. This statistical approach not only affirms our methodology but also aligns with other studies utilizing response surface methodology to enhance microbial production. Our results underscore the necessity of optimizing nutrient profiles in biotechnological processes, particularly for maximizing the yields of bioactive compounds.

In our multivariate analysis using Partial Least Squares Regression (PLSR), we elucidated the relationships between nutrient concentrations and antimicrobial activity. The high VIP scores for glucose and glycine max reaffirm their critical contributions to antibiotic production.
This analysis aligns with existing literature, highlighting the importance of optimizing carbon and nitrogen sources in microbial metabolism. The correlation circle plot we generated visually represents these relationships, reinforcing our findings on how nutrient manipulation can significantly impact antimicrobial efficacy. Our statistical validation through leave-one-out cross-validation confirms the robustness of our regression model, emphasizing its potential for guiding future research in optimizing Streptomyces metabolites. The correlation circle plot illustrated a strong positive correlation between the concentrations of glucose and glycine max and the zone of inhibition, suggesting that higher levels of these nutrients enhance antimicrobial activity. In contrast, the lower VIP score for CaCO₃ indicated that its influence on antibiotic production was less pronounced. This is corroborated by the fact that CaCO₃ mainly serves to buffer the medium rather than directly influencing the metabolic pathways related to antimicrobial synthesis. The VIP plot further substantiated the dominance of glycine max over glucose in driving antimicrobial production. This finding highlights the critical role of nitrogen sources in optimizing the biosynthesis of bioactive metabolites, which aligns with prior research emphasizing the synergistic effect of carbon and nitrogen in microbial secondary metabolism. Furthermore, the statistical validation using leave-one-out cross-validation confirmed the robustness and predictive power of our regression model, suggesting that it could be a reliable tool for guiding future optimization experiments aimed at enhancing the antimicrobial efficacy of S. kanamyceticus. These findings emphasize the necessity of fine-tuning both carbon and nitrogen sources to maximize the production of antimicrobial metabolites, and open new avenues for optimizing fermentation conditions in Streptomyces-based antibiotic production systems.

Although our study yielded valuable insights into the antimicrobial potential of S. kanamyceticus, the screening of antimicrobial activity was conducted using a limited range of pathogenic strains. In future studies, expanding this screening to include a broader spectrum of clinically relevant pathogens, particularly multidrug-resistant strains, could provide a more comprehensive understanding of the antibacterial efficacy of our isolates. Future work could incorporate advanced analytical techniques such as NMR or mass spectrometry to elucidate the specific structures of the metabolites. This would deepen our understanding of the mechanisms underlying their antimicrobial activity and facilitate the discovery of novel compounds. Moreover, our optimization experiments primarily focused on a select few nutritional variables. Considering additional factors such as pH, temperature, and incubation time in future optimization studies could further enhance the yield and activity of bioactive compounds. This holistic approach may lead to more efficient production strategies.

Conclusion
Our study provides a comprehensive analysis of S. kanamyceticus isolates, detailing their identification, antimicrobial activity, and the optimization of nutrient conditions for enhanced bioactive compound production. The findings contribute to the understanding of Streptomyces species as a valuable source for novel antibiotics, offering a foundation for future research aimed at combating antimicrobial resistance.
The methodologies employed and insights gained from this work underscore the potential of these microbial strains in biotechnological applications. |
Everything but the kitchen sink: The use of multiple hypothesis generation methods to investigate an outbreak of Salmonella Enteritidis

In December 2018, an outbreak of Salmonella Enteritidis infections was identified in Canada. Eighty-three cases across seven provinces were reported over the course of the investigation. Case-onset dates ranged from 6 November 2018 to 7 May 2019. Ages ranged from 1 to 88 years; 60% (50/83) of the cases were female; 39% (22/56) were hospitalized; and there were three deaths reported. Brand X profiteroles and eclairs imported from Thailand in October 2018 were identified as the source of the outbreak based on epidemiological, microbiological, and food safety evidence. Eggs supplied from an unregistered facility were hypothesized to be the likely cause of contamination. The outbreak investigation was challenging and complex, requiring multiple hypothesis generation methods to identify the source, including various interviewing approaches, analytic approaches, and comparison of genomic sequence data to domestic and international repositories.
During foodborne illness outbreak investigations, microbiological, epidemiological, and food safety evidence is collated to identify the source of the illnesses. For complex investigations, this can necessitate the use of multiple, iterative hypothesis generation methods. These methods may include analysing case questionnaires and open-ended interview data and using population-based food exposure surveys or online surveys to generate reference values for exposures of interest. Additional hypothesis generation methods might include case focus groups, analytic studies, in-person interviews, or shopping trips with cases. Methods such as analysing consumer food purchase records and comparing case pathogen isolate sequence data with domestic and international repositories can also help generate hypotheses and identify exposures of interest. As described by Morton et al. (2020), the hypothesis generation process is pivotal in foodborne disease outbreak investigations but often poorly described, representing a missed opportunity for sharing lessons learned. In December 2018, an outbreak of Salmonella Enteritidis was identified in Canada via whole genome sequencing (WGS), with cases geographically distributed across multiple provinces. By February 2019, the cluster was growing rapidly, and a collaborative investigation was initiated to identify the source of the illnesses, implement control measures, and prevent future illnesses and deaths. This study describes the complex and challenging outbreak investigation, highlighting the many complementary hypothesis generation methods that were used to identify the outbreak source.
Case definition
A confirmed case was defined as a resident of or visitor to Canada with laboratory confirmation of Salmonella Enteritidis, an isolate related within 0–10 whole-genome multi-locus sequence typing (wgMLST) allele differences, and symptom onset or specimen collection date on or after 1 November 2018. The collaborative investigation was deactivated on 17 June 2019.

Microbiological hypothesis generation
In Canada, all Salmonella isolates are forwarded to provincial public health laboratories or the National Microbiology Laboratory for WGS-based subtyping using the standardized PulseNet Canada (PNC) protocol. WGS data are analysed locally and uploaded to a centralized BioNumerics v7.6 (BioMerieux) database where they are analysed by the PNC national database team using wgMLST. Isolates within 0–10 wgMLST allele differences were considered genetically related and included in the case definition for this outbreak investigation. WGS data were deposited retrospectively onto the National Center for Biotechnology Information (NCBI) in BioProject PRJNA543337.

Comparison of sequence data to domestic WGS data
The PNC WGS database was reviewed for Canadian clinical and non-clinical historical matches to the outbreak within 0–10 wgMLST allele differences.

Comparison of sequence data to international repositories
Under a bilateral information sharing agreement between Canada and the United States (USA), WGS data were exchanged with PulseNet USA to facilitate the query of the PulseNet USA databases for potential matches. Related US isolates were used to query the NCBI Pathogen Detection Pipeline. Data from potentially related international isolates identified via NCBI were imported into the PNC WGS database to allow for standardized analysis. Following confirmation from the International Food Safety Authorities Network (INFOSAN) that the implicated product was distributed in Australia, PNC shared WGS data with OzFoodNet to facilitate the identification of related clinical cases in Australia.
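As context for the 0–10 allele threshold used above, the following minimal sketch shows how pairwise wgMLST allele differences can be counted and compared against such a cut-off. It is illustrative only: the profiles are hypothetical, and this is not the PulseNet Canada/BioNumerics implementation.

```python
from itertools import combinations

# Hypothetical wgMLST profiles: isolate -> {locus: allele number} (None = locus not called).
profiles = {
    "isolate_A": {"locus1": 3, "locus2": 17, "locus3": 5},
    "isolate_B": {"locus1": 3, "locus2": 17, "locus3": 8},
    "isolate_C": {"locus1": 4, "locus2": 20, "locus3": 9},
}

def allele_differences(p1: dict, p2: dict) -> int:
    """Count loci called in both profiles whose allele assignments differ."""
    shared = set(p1) & set(p2)
    return sum(1 for locus in shared
               if p1[locus] is not None and p2[locus] is not None
               and p1[locus] != p2[locus])

THRESHOLD = 10  # the outbreak definition used 0-10 wgMLST allele differences

for a, b in combinations(profiles, 2):
    d = allele_differences(profiles[a], profiles[b])
    verdict = "related" if d <= THRESHOLD else "not related"
    print(f"{a} vs {b}: {d} allele difference(s) -> {verdict}")
```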
Epidemiological hypothesis generation

Case interviewing
Local public health authorities conducted initial case investigations by telephone using routine provincial Salmonella questionnaires. These questionnaires varied in content and length, but typically included questions on various exposures prior to onset, including travel within and outside Canada, foodborne exposures (within and outside the home), zoonotic exposures, and exposures to high-risk occupations or environments. Questionnaire data were sent to the Public Health Agency of Canada (PHAC) for centralized analysis (i.e. analysis across provinces). Selected cases were re-interviewed by telephone by one of two interviewers, centralized at PHAC, using an iterative, open-ended interviewing approach. Cases were selected for re-interview based on a variety of factors, which included whether they provided consent to be re-contacted, whether they had good recall of exposures at initial interview, and the length of time since their symptom onset (with more recent cases prioritized for re-interview). These interviews were conversational in style and allowed cases to elaborate on typical food habits and preferences.

Case–case analysis
Two case–case analyses were conducted with a subset of cases from British Columbia (BC). One case–case analysis compared the exposures reported by outbreak cases to other Salmonella Enteritidis cases in BC from November 2018 to January 2019 that did not match the outbreak strain. The other case–case analysis compared the exposures reported by outbreak cases to all other Salmonella cases in BC, excluding Salmonella Enteritidis and Salmonella Typhi cases. Salmonella Enteritidis cases were excluded due to an ongoing outbreak of S. Enteritidis in BC associated with poultry and egg products, and Salmonella Typhi cases were excluded as S. Typhi is not domestically acquired in Canada; excluding these serotypes served to increase the comparability of the groups.

Grocery store site visit
Investigators visited a location of Grocery Store Chain A, a chain reported by several cases. Comparison of case clinical isolate sequence data to international repositories had revealed a potential connection to Thailand. Investigators examined product labels at Grocery Store Chain A to identify foods originating from Thailand as potential exposures of interest.

Thematic analysis
The two centralized interviewers read aloud their open-ended interviewing notes. Investigators recorded food exposures, preferences, habits, and shopping locations on a whiteboard and grouped this information into themes.

Comparison to healthy control groups
Using binomial probabilities, case exposure data were compared to results of the 2015 Foodbook population-based telephone survey, which reports the expected proportion of people in Canada reporting various food exposures in the previous 7 days. Foodbook values were restricted to the provinces where cases were reported and to months in which illness onsets were reported. A Bonferroni correction was applied to the significance threshold of each individual test to reduce the likelihood of a Type 1 error given the number of simultaneous tests (n = 185). An online survey was conducted to gather contemporary exposure information from healthy, population-based controls for foods without comparison values in Foodbook. At the time of the survey, leading exposures of interest included frozen fish and commercially prepared mixed fruit cups. The survey was launched on Canada.ca webpages and promoted through social media channels and was available from 12 April 2019 to 30 April 2019. Residents of Canada that did not report vomiting, diarrhoea, or international travel in the previous 7 days were asked about consumption of unbreaded/unbattered frozen fish, breaded/battered frozen fish, and mixed fruit cups. Survey responses were extracted once per week and compared to case exposure data.
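To make the Foodbook comparison described above concrete, the sketch below applies an exact binomial test against a reference proportion, using a Bonferroni-adjusted significance threshold for 185 simultaneous tests. The exposure names, counts, and reference proportions are hypothetical placeholders, not values from the investigation.

```python
from scipy.stats import binomtest

N_TESTS = 185                 # number of simultaneous exposure comparisons
ALPHA = 0.05 / N_TESTS        # Bonferroni-adjusted significance threshold

# Hypothetical placeholder inputs: (cases reporting the exposure, cases asked,
# reference proportion for the same exposure and time frame).
exposures = {
    "exposure_A": (40, 69, 0.43),
    "exposure_B": (25, 69, 0.10),
}

for name, (reported, asked, reference_p) in exposures.items():
    result = binomtest(reported, asked, reference_p, alternative="greater")
    flag = "significant" if result.pvalue < ALPHA else "not significant"
    print(f"{name}: {reported}/{asked} cases exposed vs reference {reference_p:.0%}; "
          f"p = {result.pvalue:.3g} ({flag})")
```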
Purchase record analysis
During centralized case re-interview, cases with grocery store loyalty cards were asked for consent for investigators to access their purchase records for the 3 months prior to illness onset. For stores without loyalty card programmes, cases were asked to provide any available receipts for their food purchases. Several confirmed cases resided in long-term care facilities; menus and invoices were requested from these facilities. Analysis of purchase records and invoices was conducted in Microsoft Excel, looking across records for commonalities in categories of foods and specific food items.

Food safety hypothesis testing

Traceback analysis
When suspect outbreak sources were identified, product details (e.g. purchase location, purchase date, product name) were shared with the Canadian Food Inspection Agency (CFIA) to support food safety investigation and traceback activities. CFIA completed traceback on several exposures of interest during the investigation.

Food sample collection and analysis from case homes and establishments
Opened and unopened food samples taken from retail, case homes, and a long-term care facility were tested by CFIA using the MFHPB-20, MFLP-29, or MFLP-40 methods published in Health Canada's Compendium of Analytical Methods for the Microbiological Analysis of Foods. The British Columbia Centre for Disease Control (BCCDC) Public Health Laboratory also tested retail samples using the MFLP-29 method. Health Canada tested recalled products obtained from retail as well as an open sample collected from a long-term care facility using the MFHPB-20 method. WGS was performed on any Salmonella isolates recovered from food samples and compared to clinical isolates.
Eighty-three cases in seven provinces were identified as part of this outbreak, with onset dates from 6 November 2018 to 7 May 2019. Ages ranged from 1 to 88 years (median age = 51 years); 60% (50/83) of the cases were female; 39% (22/56) were hospitalized; and three deaths were reported. A timeline of all hypothesis generation and testing methods used throughout the investigation is presented in the accompanying figure.

Microbiological results

Comparison of sequence data to domestic WGS data
All clinical isolates within the outbreak grouped together by 0–10 wgMLST allele differences and were considered genetically related based on WGS. The WGS profile of the outbreak isolates was considered unique within the Canadian national database.

Comparison of sequence data to international repositories
Clinical (n = 29) and food (n = 4) isolates posted on NCBI by the UK Health Security Agency (called Public Health England at the time of the investigation) were found to match the outbreak cluster within 0–12 wgMLST alleles. In early April, it was reported that the majority of the UK cases had travel to Thailand indicated on their laboratory forms. The four UK food isolates were frozen raw chicken and frozen salted chicken from Thailand, which were tested in late 2017 and mid-2018. Seven clinical isolates from the USA were identified as grouping within 0 to 14 wgMLST allele differences to the Canadian outbreak cluster, with isolation dates ranging from February 2018 to March 2019. By early May, it was confirmed that five of the US cases reported travel during their exposure period, with three cases travelling to Thailand, one to Asia (country not specified), and one to Mexico. One additional case did not report travel, but a household contact travelled to Thailand. In late May, Australian state public health laboratories ran independent analyses of Salmonella Enteritidis sequences from their jurisdictions to identify cases related to the representative outbreak sequence provided by PNC. All related Australian cases reported travel to Thailand prior to illness onset.

Epidemiological results

Case interviewing
Exposure information was available for 89% of cases (74/83). Of the 74 cases with exposure information, five reported travel to Thailand or Southeast Asia during their exposure period and were excluded from food frequency analyses. Of the nine cases without exposure information, five were lost to follow-up and four were not interviewed. Of the 74 cases with exposure information, 41 cases were interviewed using a routine provincial Salmonella questionnaire only. Thirty-three cases were additionally interviewed throughout March and April by one of two PHAC interviewers using an open-ended approach. PHAC conducted 64 open-ended interviews with these 33 cases, contacting each case one to five times throughout the investigation as additional exposures of interest were identified. Exposure to cream-filled pastries including profiteroles and eclairs was not included on any routine Salmonella questionnaires in Canada at the time of this outbreak. Consequently, only two cases reported exposure to profiteroles or eclairs (unprompted) when interviewed using routine provincial Salmonella questionnaires, and only two more cases reported this exposure (unprompted) in their first open-ended re-interview. After additional re-interviews conducted in late-April 2019, 81% (21/26) of cases who were directly asked about exposure to profiteroles and eclairs reported consuming these products in the 7 days prior to illness onset.
An additional three cases who resided in the same personal care home were not asked about profiterole or eclair exposure; however, it was later confirmed in late-April that the personal care home had received a shipment of Brand X profiteroles prior to the residents’ illness onset dates, and ‘cream puffs’ were listed on the residents’ menu.

Case–case analysis
The case–case analysis was conducted in early March. Consumption of chicken nuggets or strips, other chicken or poultry, raw eggs, beef, pork, cucumbers, nuts, and contact with animals were reported with significantly higher frequency among outbreak cases in comparison with other Salmonella Enteritidis cases in BC. In comparison with other Salmonella cases in BC (excluding Salmonella Enteritidis and Salmonella Typhi), exposures reported with significantly higher frequency among outbreak cases were contact with animals or raw pet food treats, consumption of chicken pieces or parts, chicken nuggets or strips, raw eggs, beef, pork, cucumbers, leafy greens, and nuts. Because profiteroles and eclairs are not included in the routine Salmonella questionnaire of BC, these exposures were not included in this analysis.

Grocery store site visit
The visit to Grocery Store Chain A, a chain reported by several outbreak cases, was conducted in mid-March and identified several products of interest that originated from Thailand. These products included frozen fish, canned fish, imitation crab meat, frozen breaded shrimp, and shelf-stable commercially prepared mixed fruit cups.

Thematic analysis
The thematic analysis was conducted in mid-April and revealed key themes among the outbreak cases’ reported food exposures. Cases tended to be ‘plain’ eaters preferring milder flavours and starchy products and also tended to be bargain shoppers, purchasing ‘whatever was on sale’ rather than any specific brands or flavours. Cases also tended to reside outside of large urban areas and shop at smaller or discount grocery stores.

Comparison to healthy control groups
Comparisons to Foodbook reference values were conducted throughout March and April and revealed significant results (p < 0.001) for turkey, bacon, pork pieces or parts, ground beef, whole cut beef products, and sausage. Profiteroles and eclairs do not have corresponding Foodbook values and were therefore not included in this analysis. In mid-April, frozen fish and shelf-stable commercially prepared mixed fruit cups were included in the online survey because they were commonly reported among cases during open-ended interviews and had been identified as potentially originating from Thailand during the visit to Grocery Store Chain A. The online survey received 283 complete responses between 12 April 2019 and 30 April 2019. The mean age of respondents was 38 years with a range of 16 to 72 years, and 75% of respondents were female. A larger proportion of outbreak cases reported frozen unbreaded/breaded fish (52.2%/47.8%) and commercially prepared mixed fruit cup exposure (55.5%) when compared to survey respondents (14.2%/11.0% and 7.8%, respectively). Each of these comparisons was statistically significant. Following a traceback investigation, frozen fish was determined to be an unlikely source due to lack of convergence in brands, processors, and suppliers across the exposures reported by cases.
Discussion with food safety partners revealed that shelf-stable commercially prepared mixed fruit cups were not a plausible outbreak source as they would undergo a thermal treatment sufficient to inactivate Salmonella Enteritidis. Therefore, frozen fish and mixed fruit cups were excluded as possible hypotheses.

Purchase record analysis
Twenty sets of purchase records were collected from 15 cases throughout March and April, representing purchases at 13 different grocery stores. Invoices were collected from a care home where one case resided and a personal care home where three cases resided. Ultimately, purchase records obtained in late-April for three cases from a single grocery store location indicated that each case had purchased Brand X classical profiteroles, egg nog profiteroles, or mini chocolate eclairs in the 2 months prior to their illness onset. This unusual commonality provided the initial lead for additional investigation into these products via additional interviewing (described above) and traceback investigation (described below).
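The purchase-record review described above amounts to searching for products bought by multiple, otherwise unconnected cases. The sketch below shows one way such a cross-case commonality check could be organized programmatically; the records, dates, and the grouping rule are hypothetical illustrations, not data from the investigation, which performed this analysis in Microsoft Excel.

```python
import pandas as pd

# Hypothetical, simplified purchase records assembled from loyalty-card data:
# one row per purchased item per case.
records = pd.DataFrame(
    [
        ("case_01", "2019-01-03", "Brand X classical profiteroles"),
        ("case_01", "2019-01-10", "frozen breaded fish"),
        ("case_02", "2018-12-20", "Brand X mini chocolate eclairs"),
        ("case_03", "2019-01-15", "Brand X egg nog profiteroles"),
        ("case_03", "2019-01-15", "mixed fruit cups"),
    ],
    columns=["case_id", "purchase_date", "item"],
)

# Collapse related items into a product family (illustrative grouping rule),
# then count how many distinct cases purchased each family.
records["product_family"] = records["item"].str.replace(
    r"^Brand X .*", "Brand X profiteroles/eclairs", regex=True
)
commonalities = (
    records.groupby("product_family")["case_id"]
    .nunique()
    .sort_values(ascending=False)
)
print(commonalities)
```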
Food safety results

Traceback analysis
Several exposures of interest were identified through the epidemiological investigation, resulting in traceback being conducted for almonds, frozen fish, sausage, and deli meat. Traceback analysis helped to rule out these items as suspect sources due to lack of convergence to a common supplier. The traceback investigation for profiteroles and eclairs provided strong evidence that these were the source of the outbreak. A single manufacturer was identified for Brand X products. These products were manufactured in Thailand, connecting the Canadian outbreak to the microbiological data shared by the USA, England, and, later, Australia, that were linked to Thailand. The products were imported into Canada for the first time in October 2018, aligning with the symptom-onset dates of initial cases in November 2018. The products were stored frozen, had a 15-month shelf life, and were distributed to grocery chains in the provinces where illnesses were reported. Of note, these grocery chains were generally smaller chains or discount grocery stores, consistent with what was reported during open-ended case interviewing. Additionally, the personal care home where three cases resided had received a shipment of Brand X profiteroles prior to the residents’ illness onset dates, and ‘cream puffs’ were listed on the residents’ menu. Two shipments of Brand X products were recalled in Canada on 26 April 2019. The CFIA notified Thailand of the recall via INFOSAN. INFOSAN contacts in Thailand confirmed that products produced by the Brand X manufacturer at the same time as those distributed to Canada were also distributed to Australia. Brand X products were recalled in Australia on 1 May 2019. The Thai Food and Drug Administration (Thai FDA) initiated a food safety investigation at the Brand X manufacturer. This investigation revealed that the eggs used in the production of the implicated product were sourced from a non-registered farm due to shortages in the usual supply from registered farms monitored by the Thai FDA. The eggs from the non-registered farm are hypothesized to be the source of the outbreak. Following a thorough audit, the Thai FDA identified several corrective actions for the manufacturer, including only purchasing eggs directly from a company regulated by the Thai FDA, designating the room where pastries were filled after baking as a high-risk area, changing the layout of the factory to protect the pastry filling room, and protecting the transport of baked profiteroles to the filling room.

Food sample collection and analysis from case homes and establishments
Throughout the course of the outbreak, samples of almonds, frozen fish, chicken, meatballs, fruit cups, deli meat, dry spices, sausage, and pepperoni were collected from case homes and establishments for testing; none had Salmonella detected. Three profiterole samples tested by Health Canada were positive for Salmonella Enteritidis, and the recovered isolates were genetically related to the outbreak cases based on WGS. Further analyses have since characterized the matrix of the recalled frozen profiteroles as permissive to the growth of Salmonella.
Comparison of sequence data to domestic WGS data All clinical isolates within the outbreak grouped together by 0–10 wgMLST allele differences and were considered genetically related based on WGS. The WGS profile of the outbreak isolates was considered unique within the Canadian national database. Comparison of sequence data to international repositories Clinical (n = 29) and food (n = 4) isolates posted on NCBI by the UK Health Security Agency (called Public Health England at the time of the investigation) were found to match the outbreak cluster within 0–12 wgMLST alleles. In early April, it was reported that of the UK cases, the majority had travel to Thailand indicated on their laboratory forms. The four UK food isolates were frozen raw chicken and frozen salted chicken from Thailand, which were tested in late 2017 and mid-2018. Seven clinical isolates from the USA were identified as grouping within 0 to 14 wgMLST allele differences to the Canadian outbreak cluster, with isolation dates ranging from February 2018 to March 2019. By early May, it was confirmed that five of the US cases reported travel during their exposure period, with three cases travelling to Thailand, one to Asia (country not specified), and one to Mexico. One additional case did not report travel, but a household contact travelled to Thailand. In late May, Australian state public health laboratories ran independent analyses of Salmonella Enteritidis sequences from their jurisdictions to identify cases related to the representative outbreak sequence provided by PNC. All related Australian cases reported travel to Thailand prior to illness onset.
All clinical isolates within the outbreak grouped together by 0–10 wgMLST allele differences and were considered genetically related based on WGS. The WGS profile of the outbreak isolates was considered unique within the Canadian national database.
Clinical (n = 29) and food (n = 4) isolates posted on NCBI by the UK Health Security Agency (called Public Health England at the time of the investigation) were found to match the outbreak cluster within 0–12 wgMLST alleles. In early April, it was reported that of the UK cases, the majority had travel to Thailand indicated on their laboratory forms. The four UK food isolates were frozen raw chicken and frozen salted chicken from Thailand, which were tested in late 2017 and mid-2018. Seven clinical isolates from the USA were identified as grouping within 0 to 14 wgMLST allele differences to the Canadian outbreak cluster, with isolation dates ranging from February 2018 to March 2019. By early May, it was confirmed that five of the US cases reported travel during their exposure period, with three cases travelling to Thailand, one to Asia (country not specified), and one to Mexico. One additional case did not report travel, but a household contact travelled to Thailand. In late May, Australian state public health laboratories ran independent analyses of Salmonella Enteritidis sequences from their jurisdictions to identify cases related to the representative outbreak sequence provided by PNC. All related Australian cases reported travel to Thailand prior to illness onset.
Case interviewing Exposure information was available for 89% of cases (74/83). Of the 74 cases with exposure information, five reported travel to Thailand or Southeast Asia during their exposure period and were excluded from food frequency analyses. Of the nine cases without exposure information, five were lost to follow-up and four were not interviewed. Of the 74 cases with exposure information, 41 cases were interviewed using a routine provincial Salmonella questionnaire only. Thirty-three cases were additionally interviewed throughout March and April by one of two PHAC interviewers using an open-ended approach. PHAC conducted 64 open-ended interviews with these 33 cases, contacting each case one to five times throughout the investigation as additional exposures of interest were identified. Exposure to cream-filled pastries including profiteroles and eclairs was not included on any routine Salmonella questionnaires in Canada at the time of this outbreak. Consequently, only two cases reported exposure to profiteroles or eclairs (unprompted) when interviewed using routine provincial Salmonella questionnaires, and only two more cases reported this exposure (unprompted) in their first open-ended re-interview. After additional re-interviews conducted in late-April 2019, 81% (21/26) of cases who were directly asked about exposure to profiteroles and eclairs reported consuming these products in the 7 days prior to illness onset. An additional three cases who resided in the same personal care home were not asked about profiterole or eclair exposure; however, it was later confirmed in late-April that the personal care home had received a shipment of Brand X profiteroles prior to the residents’ illness onset dates, and ‘cream puffs’ were listed on the residents’ menu. Case–case analysis The case–case analysis was conducted in early March. Consumption of chicken nuggets or strips, other chicken or poultry, raw eggs, beef, pork, cucumbers, nuts, and contact with animals were reported with significantly higher frequency among outbreak cases in comparison with other Salmonella Enteritidis cases in BC. In comparison with other Salmonella cases in BC (excluding Salmonella Enteritidis and Salmonella Typhi), exposures reported with significantly higher frequency among outbreak cases were contact with animals or raw pet food treats, consumption of chicken pieces or parts, chicken nuggets or strips, raw eggs, beef, pork, cucumbers, leafy greens, and nuts. Because profiteroles and eclairs are not included in the routine Salmonella questionnaire of BC, these exposures were not included in this analysis. Grocery store site visit The visit to Grocery Store Chain A, a chain reported by several outbreak cases, was conducted in mid-March and identified several products of interest that originated from Thailand. These products included frozen fish, canned fish, imitation crab meat, frozen breaded shrimp, and shelf-stable commercially prepared mixed fruit cups. Thematic analysis The thematic analysis was conducted in mid-April and revealed key themes among the outbreak cases’ reported food exposures. Cases tended to be ‘plain’ eaters preferring milder flavours and starchy products and also tended to be bargain shoppers, purchasing ‘whatever was on sale’ rather than any specific brands or flavours. Cases also tended to reside outside of large urban areas and shop at smaller or discount grocery stores. 
Comparison to healthy control groups Comparisons to Foodbook reference values were conducted throughout March and April and revealed significant results (p < 0.001) for turkey, bacon, pork pieces or parts, ground beef, whole cut beef products, and sausage. Profiteroles and eclairs do not have corresponding Foodbook values and were therefore not included in this analysis. In mid-April, frozen fish and shelf-stable commercially prepared mixed fruit cups were included in the online survey because they were commonly reported among cases during open-ended interviews and had been identified as potentially originating from Thailand during the visit to Grocery Store Chain A. The online survey received 283 complete responses between 12 April 2019 and 30 April 2019. The mean age of respondents was 38 years with a range of 16 to 72 years, and 75% of respondents were female. A larger proportion of outbreak cases reported frozen unbreaded/breaded fish (52.2%/47.8%) and commercially prepared mixed fruit cup exposure (55.5%) when compared to survey respondents (14.2%/11.0% and 7.8%, respectively). Each of these comparisons was statistically significant . Following a traceback investigation, frozen fish was determined to be an unlikely source due to lack of convergence in brands, processors, and suppliers across the exposures reported by cases. Discussion with food safety partners revealed that shelf-stable commercially prepared mixed fruit cups were not a plausible outbreak source as they would undergo a thermal treatment sufficient to inactivate Salmonella Enteritidis. Therefore, frozen fish and mixed fruit cups were excluded as possible hypotheses. Purchase record analysis Twenty sets of purchase records were collected from 15 cases throughout March and April, representing purchases at 13 different grocery stores. Invoices were collected from a care home where one case resided and a personal care home where three cases resided. Ultimately, purchase records obtained in late-April for three cases from a single grocery store location indicated that each case had purchased Brand X classical profiteroles, egg nog profiteroles, or mini chocolate eclairs in the 2 months prior to their illness onset. This unusual commonality provided the initial lead for additional investigation into these products via additional interviewing (described above) and traceback investigation (described below).
Exposure information was available for 89% of cases (74/83). Of the 74 cases with exposure information, five reported travel to Thailand or Southeast Asia during their exposure period and were excluded from food frequency analyses. Of the nine cases without exposure information, five were lost to follow-up and four were not interviewed. Of the 74 cases with exposure information, 41 cases were interviewed using a routine provincial Salmonella questionnaire only. Thirty-three cases were additionally interviewed throughout March and April by one of two PHAC interviewers using an open-ended approach. PHAC conducted 64 open-ended interviews with these 33 cases, contacting each case one to five times throughout the investigation as additional exposures of interest were identified. Exposure to cream-filled pastries including profiteroles and eclairs was not included on any routine Salmonella questionnaires in Canada at the time of this outbreak. Consequently, only two cases reported exposure to profiteroles or eclairs (unprompted) when interviewed using routine provincial Salmonella questionnaires, and only two more cases reported this exposure (unprompted) in their first open-ended re-interview. After additional re-interviews conducted in late-April 2019, 81% (21/26) of cases who were directly asked about exposure to profiteroles and eclairs reported consuming these products in the 7 days prior to illness onset. An additional three cases who resided in the same personal care home were not asked about profiterole or eclair exposure; however, it was later confirmed in late-April that the personal care home had received a shipment of Brand X profiteroles prior to the residents’ illness onset dates, and ‘cream puffs’ were listed on the residents’ menu.
The case–case analysis was conducted in early March. Consumption of chicken nuggets or strips, other chicken or poultry, raw eggs, beef, pork, cucumbers, nuts, and contact with animals were reported with significantly higher frequency among outbreak cases in comparison with other Salmonella Enteritidis cases in BC. In comparison with other Salmonella cases in BC (excluding Salmonella Enteritidis and Salmonella Typhi), exposures reported with significantly higher frequency among outbreak cases were contact with animals or raw pet food treats, consumption of chicken pieces or parts, chicken nuggets or strips, raw eggs, beef, pork, cucumbers, leafy greens, and nuts. Because profiteroles and eclairs are not included in the routine Salmonella questionnaire of BC, these exposures were not included in this analysis.
The visit to Grocery Store Chain A, a chain reported by several outbreak cases, was conducted in mid-March and identified several products of interest that originated from Thailand. These products included frozen fish, canned fish, imitation crab meat, frozen breaded shrimp, and shelf-stable commercially prepared mixed fruit cups.
The thematic analysis was conducted in mid-April and revealed key themes among the outbreak cases’ reported food exposures. Cases tended to be ‘plain’ eaters preferring milder flavours and starchy products and also tended to be bargain shoppers, purchasing ‘whatever was on sale’ rather than any specific brands or flavours. Cases also tended to reside outside of large urban areas and shop at smaller or discount grocery stores.
Comparisons to Foodbook reference values were conducted throughout March and April and revealed significant results (p < 0.001) for turkey, bacon, pork pieces or parts, ground beef, whole cut beef products, and sausage. Profiteroles and eclairs do not have corresponding Foodbook values and were therefore not included in this analysis. In mid-April, frozen fish and shelf-stable commercially prepared mixed fruit cups were included in the online survey because they were commonly reported among cases during open-ended interviews and had been identified as potentially originating from Thailand during the visit to Grocery Store Chain A. The online survey received 283 complete responses between 12 April 2019 and 30 April 2019. The mean age of respondents was 38 years with a range of 16 to 72 years, and 75% of respondents were female. A larger proportion of outbreak cases reported frozen unbreaded/breaded fish (52.2%/47.8%) and commercially prepared mixed fruit cup exposure (55.5%) when compared to survey respondents (14.2%/11.0% and 7.8%, respectively). Each of these comparisons was statistically significant . Following a traceback investigation, frozen fish was determined to be an unlikely source due to lack of convergence in brands, processors, and suppliers across the exposures reported by cases. Discussion with food safety partners revealed that shelf-stable commercially prepared mixed fruit cups were not a plausible outbreak source as they would undergo a thermal treatment sufficient to inactivate Salmonella Enteritidis. Therefore, frozen fish and mixed fruit cups were excluded as possible hypotheses.
Twenty sets of purchase records were collected from 15 cases throughout March and April, representing purchases at 13 different grocery stores. Invoices were collected from a care home where one case resided and a personal care home where three cases resided. Ultimately, purchase records obtained in late-April for three cases from a single grocery store location indicated that each case had purchased Brand X classical profiteroles, egg nog profiteroles, or mini chocolate eclairs in the 2 months prior to their illness onset. This unusual commonality provided the initial lead for additional investigation into these products via additional interviewing (described above) and traceback investigation (described below).
Traceback analysis Several exposures of interest were identified through the epidemiological investigation, resulting in traceback being conducted for almonds, frozen fish, sausage, and deli meat. Traceback analysis helped to rule out these items as suspect sources due to lack of convergence to a common supplier. The traceback investigation for profiteroles and eclairs provided strong evidence that these were the source of the outbreak. A single manufacturer was identified for Brand X products. These products were manufactured in Thailand, connecting the Canadian outbreak to the microbiological data shared by the USA, England, and, later, Australia, that were linked to Thailand. The products were imported into Canada for the first time in October 2018, aligning with the symptom-onset dates of initial cases in November 2018. The products were stored frozen, had a 15-month shelf life, and were distributed to grocery chains in the provinces where illnesses were reported. Of note, these grocery chains were generally smaller chains or discount grocery stores, consistent with what was reported during open-ended case interviewing. Additionally, the personal care home where three cases resided had received a shipment of Brand X profiteroles prior to the residents’ illness onset dates, and ‘cream puffs’ were listed on the residents’ menu. Two shipments of Brand X products were recalled in Canada on 26 April 2019. The CFIA notified Thailand of the recall via INFOSAN. INFOSAN contacts in Thailand confirmed that products produced by the Brand X manufacturer at the same time as those distributed to Canada were also distributed to Australia. Brand X products were recalled in Australia on 1 May 2019 . The Thai Food and Drug Administration (Thai FDA) initiated a food safety investigation at the Brand X manufacturer. This investigation revealed that the eggs used in the production of the implicated product were sourced from a non-registered farm due to shortages in the usual supply from registered farms monitored by the Thai FDA. The eggs from the non-registered farm are hypothesized to be the source of the outbreak. Following a thorough audit, the Thai FDA identified several corrective actions for the manufacturer, including only purchasing eggs directly from a company regulated by the Thai FDA, designating the room where pastries were filled after baking as a high-risk area, changing the layout of the factory to protect the pastry filling room, and protecting the transport of baked profiteroles to the filling room. Food sample collection and analysis from case homes and establishments Throughout the course of the outbreak, samples of almonds, frozen fish, chicken, meatballs, fruit cups, deli meat, dry spices, sausage, and pepperoni were collected from case homes and establishments for testing; none had Salmonella detected. Three profiterole samples tested by Health Canada were positive for Salmonella Enteritidis, and the recovered isolates were genetically related to the outbreak cases based on WGS. Further analyses have since characterized the matrix of the recalled frozen profiteroles as permissive to the growth of Salmonella .
Eighty-three cases were identified in this outbreak, with the active investigation taking place between February and April 2019. The outbreak was associated with Brand X profiteroles and eclairs imported into Canada from Thailand, with contaminated eggs hypothesized as the root cause. Although this was not the first outbreak or product recall linked to profiterole and eclair-style products , this outbreak investigation was challenging and complex and required the use of several hypothesis generation methods to identify the source. The microbiological hypothesis generation methods employed in this outbreak investigation provided significant contextual clues for investigators. Initial comparison to other clusters of Salmonella Enteritidis in Canada made it clear that this cluster was unique and genetically distinct from previously identified clusters associated with poultry. International WGS matches demonstrated a clear connection to Thailand, with clinical isolates from individuals who had travelled to Thailand and non-clinical isolates from products originating from Thailand. Five outbreak cases also reported travel to Thailand or Southeast Asia, strengthening this connection for investigators. This investigation emphasizes the value of countries routinely posting isolates and their associated metadata on public repositories to facilitate rapid comparisons during outbreak investigations, as well as the value of established information sharing agreements, such as the bilateral agreement between PulseNet-USA and PNC. Open-ended centralized case re-interviewing was the primary epidemiological hypothesis generation method used in this investigation. This strategy was employed after routine questionnaires did not reveal common exposures across outbreak cases. As has been observed previously , centralized case re-interviewing allowed for quick identification of commonalities by interviewers as they shared interview results in real time and iteratively modified questions for future interviews. The open-ended interviewing approach was also able to aid case recall challenges in the context of a lengthy delay between exposure and case re-interview, as the unstructured approach permitted investigators to fully explore case food histories, including routine diets, ‘one-time’ exposures, and foods not included in routine Salmonella questionnaires. While the result was rich contextual information, this approach was resource-intensive and produced data which required adjustments in data management strategies and new analytic approaches, including the thematic analysis employed in this investigation. Although the outbreak source was not identified after the first round of open-ended re-interviewing, the data gathered during those interviews and the patterns identified via the thematic analysis provided leads for further hypothesis generation methods (i.e. the grocery store site visit) and supporting evidence for the outbreak source once identified (i.e. a mild-flavoured, starchy food sold at smaller, discount grocery store chains). Other epidemiological hypothesis generation methods, including case–case analyses, comparisons to healthy control groups, and the site visit to Grocery Chain A, did not identify the source of the outbreak. Ultimately, purchase record data from a small grocery chain identified a common product purchased by multiple cases, which led investigators to identify the outbreak source. 
This grocery chain was amenable to sharing the requested records; however, unlike many major retailers in Canada, they did not have an existing data-sharing agreement with PHAC. As a result, this necessitated a new data-sharing agreement to be developed ad hoc during the investigation, and the process of compiling the records proved labour-intensive for the retailer in the absence of an established or automated system. Many additional methods of epidemiological hypothesis generation were considered but ultimately not employed in this investigation, as the outbreak source was identified before they were initiated. These methods included in-person re-interviews at case homes, in-person grocery store visits with cases, a focus group of cases who resided in the same small city, and a cohort study including cases that were guests at a dinner party. The food safety investigation was essential to identifying the outbreak source. Throughout the course of the investigation, the epidemiological data implicated many exposures of interest. Traceback activities were invaluable in ruling out each suspect item by searching for convergence in suppliers, manufacturers, distributors, and farms and finding none. Food sample collection and analysis activities throughout the investigation also provided additional evidence to rule out suspect sources as they were identified. Ultimately, while the epidemiological evidence pointed towards profiteroles and eclairs as the cause of the outbreak, and both microbiological and epidemiological evidence indicated a connection to Thailand, it was information received from purchase records and the ensuing food safety investigation that confirmed this link and supported product action in Canada. Analysis of recalled products later provided a microbiological link between recalled product and outbreak cases . Collaboration between Canadian and Thai food safety authorities was also key to arriving at a suspected root cause, which resulted in meaningful public health action at the manufacturer. This outbreak is a reminder of the importance of global partnerships such as INFOSAN in the protection of public health in a world of complex global food trade. Limitations There are limitations to reflect upon when considering employing the methods described here in future investigations. While many hypothesis generation methods were used, some produced false leads that may have prolonged the investigation. For example, the case–case analysis, comparison to Foodbook, and online survey all resulted in significant values for foods that were not the outbreak source. These analyses faced challenges due to characteristics of outbreak cases that were likely not shared with these comparison groups, including a strong preference for ‘plain’ flavours and shopping at smaller discount grocery chains. Because routine Salmonella questionnaires and the Foodbook study did not include questions on profiteroles or eclairs, these analyses alone would never have resulted in the identification of the outbreak source. Continual updating of routine questionnaires and population-based food consumption surveys such as Foodbook is important to ensure newly identified outbreak vehicles and changing food habits are reflected in these tools and data sources. Similarly, although searching for international sequencing matches to the Canadian outbreak did provide helpful clues in the link to Thailand, it also resulted in matches of frozen chicken isolates, another misleading clue in the investigation. 
Other methods used, such as open-ended interviews and thematic analysis, provided useful context, but did not identify a specific source of illness. This investigation highlights the importance of considering data from all hypothesis generation methods holistically and to be cognizant of the potential for false leads that might distract the investigation.
The outbreak investigation was challenging and complex, requiring multiple hypothesis generation methods to identify the source, which was Brand X profiteroles and eclairs imported into Canada from Thailand. The hypothesis generation methods employed included closed-ended and open-ended interviewing, thematic analysis of interview data, various comparisons of case data to healthy control groups, a case–case analysis, a grocery store site visit, purchase record analysis, and comparison of genomic sequence data to domestic and international repositories. Each outbreak investigation is unique, and different methods may prove helpful in each one; however, this investigation serves to demonstrate the importance of employing various hypothesis generation methods simultaneously and iteratively, especially in instances where identification of the source proves difficult.
|
Factors Contributing to Uptake of Stillbirth Evaluations: A Qualitative Analysis | b5dc6be8-7e4c-4f43-89b9-94920d817628 | 11879755 | Forensic Medicine[mh] | Introduction Approximately two million stillbirths occur around the world each year . Determining stillbirth etiology is frequently done based on clinical history and observation, such as an external examination of the body. However, due to a lack of a single systematically applied protocol, clinical diagnostic approaches vary across institutions and often do not include standardised evaluation metrics . This lack of uniformity can lead to a misdiagnosis based on preliminary clinical presentation. Understanding the cause of stillbirth is important not only to help researchers and physicians reduce incidence but also to help facilitate bereavement and decrease emotional duress . Foetal autopsy, placental histology, and genetic testing are the most useful evaluations for assessing stillbirth . Yet, despite strong recommendations from the American Congress of Obstetricians and Gynaecologists (ACOG) , only about one fifth of stillbirths in the U.S. undergo perinatal autopsy . Identifying factors contributing to a stillbirth not only helps focus care for subsequent pregnancies and target prevention strategies; it improves mental health . Stillbirth often leaves parents with increased anxiety, depression, and feelings of guilt or shame surrounding their loss . Parents are also at increased risk for experiencing anxiety during pregnancies that follow a loss . The RESPECT study , a large multi‐country study, identified the most important factors in quality bereavement care. They stressed, “Make every effort to investigate and identify contributory factors to provide an acceptable explanation to women and families for the death of their baby.” as one of the top priciples . The objective of this study was to explore individuals' beliefs, values, and experiences surrounding stillbirth evaluation decisions. Here we interviewed parents about their stillbirth experience and identified the barriers and facilitators to uptake of stillbirth evaluations. Methods This descriptive research used semi‐structured interviews that were analysed using content analysis to gather in‐depth information about the stillbirth experience, as well as factors surrounding stillbirth evaluation decisions. Part of the stillbirth experience is deciding whether to have any evaluations conducted to determine the cause of death, such as foetal autopsy, placental pathology, or genetic testing. A semi‐structured interview guide was created based on published data and the clinical expertise of the research team (Table ). 2.1 Participants Participants were identified through medical record abstraction within a national research consortium on stillbirth (SL). Ninety‐four patients who met the following inclusion criteria were sent an invitation via email from the obstetrics clinic where they received care: (1) experienced a stillbirth within the last 5 years (mothers or fathers individually); (2) the stillbirth occurred at the University of Utah; (3) the patient had previously consented to being contacted for future research; (4) was at least 18 years old at the time of the interview; and (5) able to communicate in English. Two weeks after the email invitation was sent, patients were contacted by a member of the research team (SL) via telephone to determine their interest in the study if they had not already responded to the email invitation. 
Details about the study and its voluntary nature were reiterated over the phone prior to the interview. Enrollment for interviews concluded when data saturation was reached (i.e., no new information was added from additional interviews). 2.2 Data Analysis Interviews were conducted over the phone by a single interviewer (SL or NR), audio‐recorded (February to May 2021), professionally transcribed, and the transcripts were uploaded to the software Dedoose 9.0.17 . Inductive content analysis was conducted by identifying codes from within the transcripts and systematically designating data segments that contain similar material or themes to the remaining transcripts. This coding methodology was based on prior work . One member of the research team (NR) generated the original codes (e.g., stillbirth evaluation uptake barriers). These codes were systematically applied to the remaining transcripts, with additional codes added as necessary. An independent coder (ER) reviewed data for accuracy. Discrepancies were resolved through discussion until a consensus was reached. ER, a qualitative research expert, trained SL and NR in conducting interviews and conducting the content analysis. Our study follows suggested standards for reporting qualitative research . Results Nineteen parents were interviewed. The average age of participants at the time of their stillbirth was 31.1 years (Table ).
The number of children participants reported having, including losses, ranged from 1 to 13, with an average of 4.4 children. Participants tended to be well‐educated, with 48.0% having a bachelor's degree or higher, and at least 48.0% of participants had an income higher than the median annual Utah household income of $71.6 k . Seventeen of the 19 participants were offered one or more stillbirth evaluations. Of those, 11 reported that they chose to undergo an autopsy, three, placenta histology, eight, genetic testing, three declined all examinations, and two were not offered any. 3.1 Facilitators to Stillbirth Evaluation We asked participants why they did or did not consent to fetal autopsy, placental histology, and genetic testing. The most commonly reported reason was due to personal values and beliefs. For example, having a strong belief in science, wanting the information to inform future pregnancies, altruism, or they simply wanted to know why. The following are examples, with quotes from participant interviews, of the values and beliefs that contributed as facilitators to the stillbirth evaluation decision. "I really wanted the autopsy. For me that wasn't weird, to be offered that, just because I have more of a medical‐based occupation. I have dissected cadavers, and I like knowing the reasons why. I was like, Why did this happen?" Participant 4 "I worked in medical field for almost 20 years, and my husband is very much into finding out as much weak information we could if we—for future pregnancies, so we definitely wanted to know what had gone wrong and answer some questions that we had in our own minds about what had gone wrong." Participant 5 "I remember that I pretty enthusiastically agreed because I believe in science, and having more information is helpful to me on a personal level. Also, HELLP syndrome in particular is still being researched." Participant 6 Parents who chose placental histology or genetic testing but declined fetal autopsy also stated how they desired some understanding of the cause, yet felt protective of their baby. "Ultimately, my husband and I decided that we didn't want an autopsy done on her and that, due to the genetic testing coming back with no real answers, there would be no need to find out what was wrong with her." Participant 13 The information parents received also contributed as a facilitator to consenting to stillbirth evaluation(s). When asked how evaluations were offered, responses ranged from not receiving any information to an in‐depth conversation supplemented with educational reading material. Those who chose to have at least one of the stillbirth evaluations ( n = 14) remembered more about the evaluation options offered and how the information was presented to them. For example, three participants received an informational pamphlet, and six were enrolled in a foetal autopsy study not related to this work. The six participants who opted to get an autopsy as part of another study were presented with the most information, and several expressed altruistic reasons for participating. "You know what? They had a clipboard, and they had some papers, and they pretty much talked me through the choices that I had. They explained to me about the study that they were doing and whether or not I wanted the autopsy." Participant 7 Additionally, one of the participants saw medical providers treat their baby with respect, which was expressed as a facilitator for choosing autopsy. "We had the time that we needed. 
They treated him just like a baby, even though he was the size of my hand and was dead. That was really helpful. That made the choice to do the autopsy that much easier because it didn't feel like, Oh yeah, okay, here; we're just gonna treat this like we're dissecting a frog in biology." Participant 9 3.2 Barriers to Stillbirth Evaluation Personal values and beliefs also were cited as the main reasons for declining evaluations, typically foetal autopsy. Participants said they declined one or more evaluations to protect their baby from the harm they imagined was caused by the procedure, to spend more time with their baby, because of cost, or because they believed they already knew the cause of death prior to being offered the evaluations. "She's my angel looking after me. I didn't want to put her through that." Participant 14 "I needed to spend the time with him and have the keepsakes of him taken care of, and I don't think I needed anything outside of that." Participant 17 "They did offer it [autopsy & genetic testing]. They said it was not covered by my insurance." Participant 15 "It was my cervix, so I had that answer. I didn't want to disturb her little body, so we decided not to." Participant 16 Medical providers were sometimes the barrier to obtaining a stillbirth evaluation. In several cases, the participant received a diagnosis before being offered any of the evaluations, which contributed to their decision to decline an evaluation. Additionally, some participants said their provider recommended against an evaluation, even if the participant asked for one. These participants also expressed lingering resentment towards their provider for not supporting their wishes. "It seemed like we had to push and pry to get testing done. ‘Well, if you guys are really worried about it, we could do an autopsy, but it's gonna cost money.’ It's like, well, how much? It's like $100. It's like, you kidding me? In the grand scheme of medical expenses, that's nothing." Participant 12 The last barrier identified was not receiving information about the evaluations. Two participants were not offered evaluation options. Nonetheless, they expressed that they would have liked to have been told about their options. "I think any knowledge, any option, is good because a lot of times you could go to a doctor if you have any questions, like now. Would you want to know? I don't know unless I'm offered." Participant 19 A summary of the facilitators and barriers can be seen in Table . 3.3 Satisfaction or Regret in Stillbirth Evaluation(s) Decision Sixteen of the 19 participants expressed satisfaction about their stillbirth evaluation decision regardless of their choice. The main reasons most participants gave for being satisfied with their decision were knowledge about the cause or because they felt they did everything they could. Several also expressed that the results helped them cope with the loss or that they felt relief from the results. Representative quotes from participants who opted for one or more evaluations: "Absolutely, yeah. Yeah, definitely. I'm now pregnant again, and I think it is helpful to have that information." Participant 1 "I feel like that one, we handled it the best. We did everything we could." Participant 3 "I would have regretted not having had that information." Participant 6 The following is a representative quote from a participant who declined evaluations: "Just because we already knew her condition and what she was diagnosed with. I truly just believed it was just a very rare situation.
…I didn't want to put her through that." Participant 14 Most participants did not regret their stillbirth evaluation decision. "I guess, personally, I don't see the downside of the autopsy. It didn't seem like there was much of any noticeable expense to it. I wish I could say that, yeah, we got all this useful information from it. I don't know that we did, but at least I can look back on it and say, ‘You know what? At least we tried,’ or at least if there was something super obvious as to what happened, we would have found out." Participant 12 The only regret that was expressed was from one participant whose decision was not realised and from the two participants who did not receive information about their stillbirth options. "It definitely was [frustrating] to leave the hospital and not really have a definite answer on why it happened and how it could happen if I happen to get pregnant again." Participant 15 "I think at least—I would have like to know about the histology of the placenta. Even the genetic testing, I think that—I don't know if that would be able to tell me more or not, or the doctors more or not. We don't know unless we find out, unless we look." Participant 19
Discussion 4.1 Main Findings In the present study, the reasons expressed for consenting to an evaluation varied by type of examination. Parents who consented to foetal autopsy wanted to understand why their baby passed, approaching their decision with more deductive reasoning than emotions. Those who chose placental histology or genetic testing over foetal autopsy often didn't want to harm their baby, but desired information about the cause of their stillbirth. Other reasons for consenting to one or more of the stillbirth evaluations were a desire to understand whether they were to blame, to inform possible future pregnancies, and the respect shown by the medical team towards the stillborn and parents. Those who chose one or more of the evaluations often expressed a desire to contribute to the scientific knowledge about preventing future stillbirths and to help others who may have suffered a loss. Decision‐making for stillbirth evaluations is often impacted by emotions and parental readiness . This decision comes at a time when parents are grieving the loss of their baby, maternal physical exhaustion from the birth, or maternal impairment from anaesthesia or opioid medications for pain. This level of distress decreases one's ability to make decisions , while conflicting desires and needs (e.g., protecting their baby vs. wanting answers) further complicate decision‐making. Among participants in this study, this decision‐making aligned with the knowledge, values, and existing parental beliefs prior to the event.
Unfortunately, misconceptions about medical evaluation contributed to declining one or more evaluations, most often foetal autopsy. 4.2 Limitations Patients were recruited from a single hospital; the majority were non‐Hispanic White, well educated, wealthier, and at least 40% belonged to The Church of Jesus Christ of Latter‐Day Saints. Results from this cohort may not reflect the general population; stillbirth occurs at higher rates among minoritized and socioeconomically disadvantaged groups . However, our participants shared experiences similar to two other research cohorts from around the world . Additionally, there may be recall bias due to the impact of the highly distressing nature of stillbirth. 4.3 Interpretation The reasons for declining stillbirth evaluations are varied and complex, with several notable differences across populations. Previous research identified several reasons parents decline evaluations, including complexity of the consent, emotional stress of the situation, lack of information for families and providers, mistrust, protective parenting, and belief that no new information will be found . However, culture also may contribute to stillbirth evaluation decisions. For instance, a study in Malaysia found that among the Muslim population, religious tenets stipulated that autopsies can only be carried out on a stillborn of less than 120 days' gestation . Women interviewed in tertiary care centers in Blantyre, Malawi, Mansa, Mwanza, Tanzania, and Zambia expressed fear of blame for the stillbirth and the consequences of denying cultural traditions . In the US, participants from this study who chose to consent to stillbirth evaluations approached their decision with more deductive reasoning than emotions and tended to note how a science or medical background made them feel comfortable with their decision. On the other hand, some findings may apply across cultures. In this study, the most common reason stated for declining a stillbirth evaluation was having already received a probable cause of stillbirth before any of the examinations were offered. They believed that no new information would be found with further testing. However, even in the case of inconclusive evaluation results, our participants expressed that actively doing something for their baby or confirming that they were not responsible for the death was valuable to them and their healing process, consistent with other findings . Our participants were more often misinformed about foetal autopsy than other tests. Several participants expressed a desire to protect their baby from the harm inflicted by an autopsy without knowing what the procedure entails or the other evaluation options. Meaney et al. indicated that parents' misperceptions about the invasiveness of autopsies were based on the dramatisation seen on television . Even among a group of Italian physicians, half believed the baby would be dismembered . However, the autopsy exam may be done at varying levels of invasiveness, with targeted options that assess part of the body or radiographic‐only exams, which do not require any incisions . The PURPOSe study in India and Pakistan determined that using minimally invasive tissue sampling was reliable and found acceptable by the Muslim community . Genetic testing only requires blood or a small tissue sample, identifies copy number variants, and can be used to inform care in subsequent pregnancies . Even with a complete foetal autopsy, the incisions are created similarly to a surgery and stitched afterward .
Clothing and a cap can cover the incisions if the patient desires an open casket funeral. These misconceptions can be addressed through better communication or educational materials. Decision aids are tools that support informed decision‐making in medical situations where there is often no "best" option . A decision aid for stillbirth evaluations, tailored for particular cultures, could explain the level of invasiveness of each examination, and present alternatives to complete autopsy, such as partial autopsy, computed tomography, ultrasonography, or magnetic resonance imaging . The creation and utilization of this decision aid would provide unbiased information, increase capacity for shared decision‐making between patients and their providers, and reduce decisional conflict of those faced with uncertainty. Hospital‐level factors contributing to stillbirth evaluation decision‐making include the necessity to make many other decisions concerning the stillbirth, a lack of provider training, limited perinatal pathologists, cost, and limited time available for medical providers to communicate with bereaved parents . Hospitals could increase the uptake of evaluations by educating providers on evaluation options, the procedures involved, and the importance of a correct diagnosis . Participants in our study unveiled several examples of how they were misinformed. For instance, some parents chose to decline evaluations because they erroneously thought that it precluded them from spending time with their baby. Another key area that could facilitate decision‐making is educating clinicians and hospital staff on best practices for interacting with grieving parents . Within our cohort, some women shared gratitude about the care they received in the hospital. Yet others expressed frustration concerning interactions with providers, which lingered long after the stillbirth. Physician communication training programs have been successfully created in other medical fields, such as primary care . By giving clinicians the skills they need to communicate with patients about difficult topics, the patient is more likely to receive care that supports their values and needs. Finally, parents are asked to make numerous decisions in a small timeframe that they, and sometimes hospital personnel, are not adequately prepared to deal with. Some healthcare providers are not confident in the utility of examinations, such as autopsy, and therefore do not take the time to discuss this option . Parents in this cohort who were not offered or were denied stillbirth evaluations expressed dissatisfaction with the medical system. Negative experiences like these can colour the stillbirth experience for parents for years afterwards.
Conclusion Stillbirth evaluations improve etiological understanding for parents, as well as for the care providers and researchers trying to identify risk factors to prevent stillbirth. Two major barriers to autopsy consent were the misconception that an autopsy would preclude time spent with the baby and the belief that no new information would be gained from an evaluation. Providers could potentially improve uptake by educating parents about and offering stillbirth evaluations, supporting parents' wishes, and treating their baby with respect. Nathan Blue: writing – review and editing (equal). Erin P. Johnson: supervision; writing – review and editing (equal). Sarah Lopez: data curation (equal); writing – review and editing (equal). Jessica Page: writing – review and editing (equal). Naomi O. Riches: formal analysis (lead), writing – original draft (lead); writing – review and editing (equal). Erin Rothwell: conceptualisation (lead); formal analysis; writing – original draft; writing – review and editing (equal). Robert M. Silver: conceptualisation (equal); writing – review and editing (equal). Tsegaselassie Workalemahu: writing – review and editing (equal). Ethical approval for this study was granted by the University of Utah Institutional Review Board (IRB_00133359). Conflicts of Interest The authors declare no conflicts of interest. Table S1. Semi‐structured interview guide.
The modified 5-item frailty index in total hip arthroplasty patients: a retrospective cohort from a low-middle income country | f3d320fa-5a3e-489d-bcf9-3e3e3796de68 | 11924793 | Surgical Procedures, Operative[mh] | Total hip arthroplasty (THA) is an effective surgical intervention delivering consistent and reliable outcomes for patients with hip conditions . The procedure utilizes prosthetic devices made of metal, plastic, or ceramic to repair or reconstruct diseased or damaged hip joints. It is primarily indicated for patients with end-stage osteoarthritis, however, its utility extends to osteonecrotic, traumatic, inflammatory and even congenital diseases. Although employed across demographics, the frequency of geriatric individuals undergoing the procedure is rising. These individuals can have poor post-operative outcomes given their risk profile and potential complications of the procedure such as infection, dislocation and implant failure, all of which necessitate the need for reliable peri-operative assessment via scoring systems. . Predictive scoring is a strategy employed across the board for various surgeries. Various scoring systems have evolved over the years to demonstrate a high predictive efficacy, allowing informed decision making for physicians and patients. The establishment of predictive scoring in orthopedic surgery is still subject to debate, with American Society of Anesthesiologists (ASA) Physical Status Classification System, the Elixhauser Comorbidity Method (ECM), the Charlson Comorbidity Index (CCI), and the Readmission After Total Hip Replacement Risk Scale (RATHRR) being widely used. These models estimate complication rates by aggregating deficits to measure reduced physiologic reserve. The modified frailty index, mFI-5, is a frailty score which evaluates five comorbidities—ischemic heart disease (IHD), hypertension, diabetes, chronic obstructive pulmonary disease (COPD), and functional status—offers a streamlined alternative by using fewer variables while yielding comparable predictive accuracy. The total score, ranging from 0 to 5, correlates directly with frailty and the risk of adverse outcomes, such as postoperative complications, extended hospital stays, and mortality. Given its validity and ease-of-application, the mFI-5 is particularly valuable in resource-limited settings, where the lack of standardized medical records, coupled with limited access to healthcare often leads to incomplete patient information. Complications following THA place a significant burden on both patients and healthcare systems, particularly in low-and middle-income countries (LMICs) where out of pocket payment is the norm. Risk stratification models like the ECM and CCI offer increased sensitivity but are often impractical in these settings due to use of numerous variables. The mFI-5, using only five variables, presents a simpler alternative without sacrificing predictive accuracy. . The mFI-5’s strength lies in its ease of use, requiring only basic patient history, making it well-suited in resource constrained environment. By identifying high-risk patients preoperatively, healthcare providers can optimize resource allocation, reducing complications which increase costs and care complexity in these settings. Given the average cost of treating an infected arthroplasty is over three times higher than uncomplicated cases, effective risk stratification becomes crucial in cost-saving efforts. 
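Because the index is simply a count of positive items, its calculation can be sketched in a few lines; the function below is an illustration using the five components listed above, with field names chosen for readability rather than taken from any particular dataset.

```python
# Minimal sketch of the mFI-5: one point per positive item, giving a score of 0-5.
# The five items follow the description in the text (IHD, hypertension, diabetes,
# COPD, and impaired functional status); field names here are illustrative.
def mfi5_score(ihd: bool, hypertension: bool, diabetes: bool,
               copd: bool, dependent_functional_status: bool) -> int:
    return (int(ihd) + int(hypertension) + int(diabetes)
            + int(copd) + int(dependent_functional_status))

# Example: a patient with hypertension and diabetes only scores 2.
print(mfi5_score(ihd=False, hypertension=True, diabetes=True,
                 copd=False, dependent_functional_status=False))
```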
To the best of our knowledge, this study is the first of its kind from an LMIC, aiming to evaluate and validate the mFI-5 as a strong predictor of early postoperative outcomes in the setting of a major tertiary referral center in a low- and middle-income country. By providing caregivers with a clearer understanding of the mFI-5's advantages, such as its simplicity and minimal data requirements, as well as its drawbacks, this study seeks to inform perioperative assessment practices through data, particularly where resources are constrained.
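For illustration, the scoring and grouping described above and in the Methods can be expressed in a few lines of code. This is a minimal sketch, not the authors' implementation; the item keys and the example patient record are hypothetical, and the 0–1 versus ≥2 grouping follows the categorization used in this study.

```python
# Minimal sketch of mFI-5 tallying and grouping (illustrative only).
# Items: ischemic heart disease, hypertension, diabetes, COPD, and functional
# status, each recorded as 0 or 1 (1 when the deficit is present).

MFI5_ITEMS = ("ihd", "hypertension", "diabetes", "copd", "functional_status_deficit")

def mfi5_score(patient: dict) -> int:
    """Sum the five binary mFI-5 items, giving a score from 0 to 5."""
    return sum(int(bool(patient.get(item, 0))) for item in MFI5_ITEMS)

def mfi5_group(score: int) -> str:
    """Dichotomize as in this study: 0-1 = low frailty, 2-5 = high frailty."""
    return "low frailty (mFI-5 0-1)" if score <= 1 else "high frailty (mFI-5 2-5)"

# Hypothetical example record
patient = {"ihd": 0, "hypertension": 1, "diabetes": 1, "copd": 0, "functional_status_deficit": 0}
score = mfi5_score(patient)
print(score, "->", mfi5_group(score))  # 2 -> high frailty (mFI-5 2-5)
```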
Study design and setting

This retrospective cohort study was conducted in a tertiary care hospital setting, adhering to the principles of the Declaration of Helsinki. Data from the prospective hip arthroplasty registry of the Aga Khan University Hospital (AKUH), a tertiary care hospital in the metropolitan city of Karachi, Pakistan, were used. The registry was approved by the Institutional Review Board (IRB) before initiation, and the data qualified for exemption from the ethical review committee.
Study sample

A total of 498 patients who underwent total hip arthroplasty (THA) between January 2014 and December 2019 were included in the final analysis. All eligible patients identified during the study period were included.
Inclusion and exclusion criteria

Patients above 18 years of age who underwent THA at the hospital during the study period and consented to be included in the registry were included in the study. Patients who did not give their consent to be included in the registry were excluded.
Data collection and protection

The study utilized data from the institutional hip arthroplasty registry, established in 2014. The registry provided comprehensive information on patient demographics, medical history, and surgical outcomes. Data from the medical records were manually extracted and entered into an Excel sheet by trained individuals to ensure consistency and accuracy. Collected parameters included patient age, sex, comorbidities, surgical approach, intraoperative complications, postoperative complications, length of hospital stay, and follow-up outcomes. For the purposes of this study, the entire dataset from the registry was utilized without modification for analysis. All data collected were stored in a secure electronic file, accessible only to the research team. Access to this file was restricted to team members solely for the purpose of analysis, ensuring the protection and confidentiality of patient information.
Outcome variables

The mFI-5 score was categorized into two groups: mFI-5 = 0–1 (Group 1), indicating low-frailty patients, and mFI-5 = 2–5 (Group 2), indicating high-frailty patients. The outcome of the study was post-surgical adverse outcomes. The post-operative adverse outcomes evaluated for each group included complications such as dislocations, infections, venous thromboembolisms, pulmonary embolisms, others (neuropathies, pain, and nerve injuries), and mortality.
Statistical analysis

The data were analyzed using STATA version 15.0. Descriptive statistics were used to summarize the demographic and clinical characteristics of the patients. Quantitative variables were reported as medians with interquartile ranges (IQR) for non-normally distributed data. Categorical variables were reported as frequencies and percentages. The incidence of post-operative adverse outcomes, including complications, adverse intra-operative events, 30-day readmission, 30-day mortality, and length of stay, was compared between the two mFI-5 groups. To examine potential differences between mFI-5 categories, we compared patient characteristics across groups using the Wilcoxon rank-sum test for continuous variables and chi-square tests for categorical variables. To examine the relationship between the outcome (post-operative complications) and the mFI-5 and other covariates, univariable and multivariable logistic regression models were constructed. The assumptions of the models were checked for all analyses, including the absence of multicollinearity. All p-values were two-tailed, and a value of p ≤ 0.05 was regarded as statistically significant.
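The analyses above were carried out in STATA; purely as an illustrative analogue, the sketch below shows how the same group comparisons and the multivariable logistic regression could be set up in Python. The data are simulated and the column names, effect sizes, and covariate list are hypothetical stand-ins for the registry variables described above.

```python
# Illustrative Python analogue of the analyses described above (actual analysis: STATA 15).
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

# Simulated registry-like data (column names and effects are hypothetical)
rng = np.random.default_rng(0)
n = 200
mfi5_high = rng.integers(0, 2, n)                       # 1 = mFI-5 >= 2
age = rng.normal(60, 10, n).round()
length_of_stay = rng.poisson(6, n) + 1
lin = -3.0 + 0.7 * mfi5_high + 0.02 * (age - 60) + 0.12 * length_of_stay
complication = rng.binomial(1, 1 / (1 + np.exp(-lin)))  # any 30-day complication
df = pd.DataFrame({"mfi5_high": mfi5_high, "age": age,
                   "length_of_stay": length_of_stay, "complication": complication})

# Wilcoxon rank-sum (Mann-Whitney U) test for a continuous variable across mFI-5 groups
low, high = df[df.mfi5_high == 0], df[df.mfi5_high == 1]
u_stat, p_cont = stats.mannwhitneyu(low.length_of_stay, high.length_of_stay)

# Chi-square test for a categorical outcome across groups
chi2, p_cat, dof, _ = stats.chi2_contingency(pd.crosstab(df.mfi5_high, df.complication))

# Multivariable logistic regression; exponentiated coefficients give adjusted odds ratios
model = smf.logit("complication ~ mfi5_high + age + length_of_stay", data=df).fit(disp=False)
print(p_cont, p_cat)
print(np.exp(model.params).round(2))                    # aORs for intercept and covariates
```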
Demographics

Of the 498 patients in the study, 62.8% (313) had an mFI-5 score of ≤ 1 and 37.2% (185) had a score > 1. The median age was 50 years (range: 36–63) for patients with an mFI-5 score of ≤ 1 and 65 years (range: 58–72) for those with a score > 1. The proportion of males was slightly higher in the mFI-5 (0–1) group (52.4%) than in the mFI-5 (> 1) group (47.6%). Individuals in the mFI-5 (> 1) group had a slightly higher median body mass index (BMI) of 27.54 kg/m² (IQR: 23.62–30.04) compared to those in the mFI-5 (0–1) group, with a median BMI of 25.78 kg/m² (IQR: 23.33–29.21). The presence of diabetes (59.46%), hypertension (11.89%), and ischemic heart disease (91.89%) was more common in the mFI-5 (> 1) group.
Peri-operative findings

Patients with mFI-5 scores greater than 1 had higher ASA grades, with 61.6% in grade 3 compared to only 18.5% in the mFI-5 (0–1) group. The use of general anesthesia was slightly lower in the mFI-5 (> 1) group (87.0%) compared to the mFI-5 (0–1) group (94.6%). Median preoperative hemoglobin levels were similar between the groups, at 11.6 g/dL (IQR: 10.8–14.3) for mFI-5 (0–1) and 11.9 g/dL (IQR: 11–13.4) for mFI-5 (> 1). Postoperative hemoglobin levels were also similar between both groups, with a median of 10.3 g/dL. However, blood transfusions were more frequently required in the mFI-5 (> 1) group (47.0%) compared to the mFI-5 (0–1) group (34.8%). Blood loss was slightly higher in the mFI-5 (> 1) group, with a median of 300 mL (IQR: 200–450) compared to 250 mL (IQR: 170–500) in the mFI-5 (0–1) group. The median length of hospital stay was 6 days for both groups, with a slightly broader range in the mFI-5 (> 1) group (IQR: 5–8 days). Table outlines the demographic, preoperative, and postoperative characteristics of patients identified across the mFI-5 groups.
Post-operative complications

Among the 498 patients, those with higher mFI-5 scores (> 1) had a greater incidence of postoperative complications compared to those with lower scores (0–1). In the mFI-5 (0–1) group, 9.6% of patients experienced complications, whereas in the mFI-5 (> 1) group, 17.8% of patients had complications, as noted in Fig. . After adjusting for demographic, pre-operative, and postoperative characteristics, patients with mFI-5 scores greater than 1 had 97% higher odds of experiencing postoperative complications compared to those with scores of 1 or less (aOR = 1.97, 95% CI 1.06–3.70). Patients with a diagnosis of fracture had 63% lower odds of complications compared to those with AVN (aOR = 0.37; 95% CI 0.16–0.87). Additionally, higher postoperative hemoglobin levels were significantly associated with a reduced likelihood of complications. For each g/dL increase in postoperative hemoglobin, the odds of complications decreased by 26% (aOR = 0.74, 95% CI 0.61–0.90). Table presents the detailed multivariable results for postoperative complications. Length of stay was also associated with post-operative outcomes: with every one-day increase in length of stay, the odds of experiencing complications increased by 13% (aOR = 1.13; 95% CI 1.05–1.21).
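The percentage statements above follow directly from the adjusted odds ratios: the percent change in odds is simply (aOR - 1) × 100. The snippet below reproduces that arithmetic for the estimates reported in this section; it is a worked check, not part of the original analysis.

```python
# Percent change in odds implied by an adjusted odds ratio: (aOR - 1) * 100
reported = {
    "mFI-5 > 1 vs 0-1":          1.97,   # -> +97% odds of complications
    "fracture vs AVN":           0.37,   # -> -63% odds
    "per g/dL postoperative Hb": 0.74,   # -> -26% odds per unit
    "per extra day of stay":     1.13,   # -> +13% odds per day
}
for label, aor in reported.items():
    pct = (aor - 1) * 100
    print(f"{label}: aOR = {aor:.2f} -> {pct:+.0f}% change in odds")
```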
This is the first study of its kind to examine the use of the mFI-5 in patients undergoing THA in a resource-limited setting. The findings strengthen the case for the mFI-5 as an effective risk stratification and predictive scoring tool in such settings. The study revealed a significant association between the mFI-5 score and 30-day outcome measures in patients undergoing THA in resource-limited settings. The results indicate that patients with high mFI-5 scores had higher odds of experiencing postoperative complications. Likewise, as the number of days spent in hospital increased, the likelihood of experiencing post-operative complications also increased.

The outcomes of THA can be affected by various factors, both independent of the patient (type of prosthesis, surgeon expertise, and hospital characteristics) and related to the patient (age, presence of co-morbidities, severity of osteoarthritis, preoperative functional status). Numerous studies have established frailty as a critical determinant of poor surgical outcomes across specialties, with the mFI-5 emerging as a reliable and efficient tool for preoperative evaluation. Recently published data have established the utility of the mFI-5 as an independent predictor of postoperative surgical outcomes. Furthermore, a paper by Traven et al. noted that the risk of complications increased by 25% for every single-point increase on the mFI scale, with the corresponding increase in the risk of 30-day mortality being as high as 49%. In our analysis, patients with mFI-5 > 1 had 97% higher odds of experiencing postoperative complications, in concurrence with the findings of Traven et al., who also observed an increase in postoperative complications in patients with higher mFI-5 scores.

Our results indicate that patients in the high mFI-5 (> 1) group experienced a higher rate of complications, particularly infections and dislocations. One factor contributing to the increased infection rates in the higher mFI-5 group may be the presence of comorbidities, such as diabetes, which is known to impair the immune response and increase surgical site infections. Other studies have similarly found that higher mFI-5 scores correlate with increased rates of infection in both general and orthopedic trauma settings, reinforcing our findings. Notably, length of stay (LOS) also emerged as a significant factor influencing postoperative outcomes in our study. Patients with prolonged LOS demonstrated a higher incidence of postoperative complications. An extended hospital stay can reflect both the severity of frailty and the development of postoperative complications such as infections (hospital-acquired, in already immunocompromised patients) and venous thromboembolism (VTE), which are major causes of morbidity, mortality, and increased health care costs.

From a public health perspective, the mFI-5 has important implications for perioperative management in LMICs. With scarce resources, particularly limited hospital beds and a shortage of trained health care providers, the mFI-5 can assist in optimizing peri-operative planning by focusing on high-risk individuals. It can also assist in hospital management by providing patients and families with realistic expectations of potential costs, which is particularly important in LMICs where healthcare expenses are often paid out of pocket. In LMICs, where the prevalence of diabetes exceeds 30%, the impact on postoperative outcomes can be significant. For instance, an observational study by Hasan et al.
using the NSQIP database highlighted higher odds ratios for complications in orthopedic surgeries in LMICs, including increased risks of SSIs, sepsis, and readmissions, which aligns with our analysis. In resource-constrained LMICs, high postoperative infection rates can potentially be mitigated through stringent preoperative infection protocols and the use of aggressive perioperative antibiotics.

Interestingly, unlike previous studies, our research did not establish a difference in 30-day mortality between the two groups. This can be attributed to limitations of our study, such as the relatively small sample size and shorter duration of follow-up, as well as local factors such as patients lost to follow-up and underreporting of deaths by families, which affect the accuracy of the data.

Some limitations were noted in our study. Firstly, due to resource limitations, we had a smaller sample size limited to patients presenting at a single institution, which limits generalizability and warrants further research with larger sample sizes, through collaborations with multiple centers, to enhance the validity of surgical outcomes. Secondly, risk stratification calculators assign equal weight to different underlying comorbid conditions irrespective of the severity of each condition and its impact on the individual patient. Nevertheless, our study aims to fill a gap in the literature from LMICs, and we hope that it will allow researchers in LMICs to conduct further, larger, prospective studies to assess outcomes in patients from similar regions.
This is the first study of its kind to explore the Modified 5-Item Frailty Index (mFI-5) for predicting 30-day outcome measures in THA patients in resource-limited settings. Because the relevant parameters are easily obtained from the patient's history, the mFI-5 serves as a practical tool to guide perioperative care, promote shared decision-making, and optimize patient outcomes in THA procedures. With the increasing incidence of hip fractures for multiple reasons, hip arthroplasty procedures are expected to rise substantially; we therefore propose the standardization of preoperative mFI-5 scoring to guide perioperative decision-making for both patients and caregivers. Policies should be instituted to exercise increased caution in the treatment approach for patients with high mFI-5 scores to mitigate the risk of post-operative complications. Further research on larger databases is needed to refine frailty assessments and enhance the validity of risk stratification calculators like the mFI-5, particularly in resource-constrained environments.
|
Enrichment of

Periodontitis is a chronic inflammation that affects the supporting structures of the teeth; it is characterized by bacterial plaque accumulation, which leads to tissue destruction and potential tooth loss. Numerous epidemiological and animal studies have reported that periodontitis is associated with various extraoral inflammatory diseases, such as cardiovascular diseases, diabetes, and rheumatoid arthritis. As a plausible causative mechanism for these biological associations, the leakage of pathogenic bacteria and proinflammatory cytokines originating from the periodontal lesion into the systemic circulation has been proposed. Recent studies have also proposed the possible involvement of the oral–gut axis in systemic diseases as a new causative mechanism. Research has shown that oral microbes, including periodontal pathogens, translocate to the gut, which results in the ectopic enrichment of oral bacteria in the gut and notable alteration of the gut microbial balance. The subsequent adverse effects have been demonstrated in gastrointestinal diseases, such as inflammatory bowel disease and colorectal cancer (CRC).

CRC is a heterogeneous malignant disease; it is the third most diagnosed cancer worldwide and the second leading cause of mortality among patients with cancer. The occurrence and progression of CRC are influenced by various environmental and genetic factors. As a key environmental factor, increasing evidence has indicated the crucial role of specific bacteria in the pathogenesis of CRC. F. nucleatum is a gram-negative anaerobic oral commensal and periodontal pathogen, and it is recognized as a CRC-related bacterium. Clinical studies have reported that F. nucleatum is more abundant in cancerous than in noncancerous tissues and that its presence is correlated with poor prognosis in patients with CRC, particularly those in advanced stages. In addition to the direct procarcinogenic effects of F. nucleatum on the mucosal epithelium, a pivotal role of F. nucleatum in regulating the tumor microenvironment, the complex milieu surrounding tumors that consists of various cellular and extracellular components, has been reported. Despite the substantial knowledge of the involvement of F. nucleatum in the pathogenesis of CRC, the involvement of other periodontal pathogens, including P. gingivalis and P. intermedia, remains poorly understood.

In addition to the oncogenic effects of specific bacteria, accumulating evidence indicates that dysbiosis of the gut microbiota is also associated with CRC development. Most studies on the gut microbiota have focused on fecal samples (referred to as lumen-associated microbiota [LAM]) owing to the convenience and noninvasiveness of fecal sampling. However, recent studies have reported another microbiota colonizing the intestinal mucosa (referred to as mucosa-associated microbiota [MAM]) that exhibits a composition distinct from that of LAM. From a biogeographic viewpoint, the microorganisms in the MAM are more likely to directly interact with the intestinal epithelium than those in the LAM, indicating a pivotal role of the former in the tumor microenvironment of CRC. Nevertheless, the possible association of the periodontal pathogens of the MAM with CRC development remains to be elucidated. Therefore, this study aimed to investigate the involvement of periodontal pathogens in CRC and explore the underlying biological mechanism, highlighting the MAM and LAM.
Mice

All the animal experiments were approved by the Committee for the Care and Use of Laboratory Animals of Niigata University (approval numbers: SA01272 and SA01375) and conducted in accordance with the Regulations and Guidelines for the Scientific and Ethical Care and Use of Laboratory Animals of the Science Council of Japan. Specific pathogen-free male C57BL/6 mice (6–8 weeks old) were obtained from Japan SLC, Inc. (Shizuoka, Japan). They were maintained in a specific sterile colony under completely controlled conditions (12-hour light/dark cycle with lighting at 8:00 am) and allowed access to a commercial diet and water ad libitum. As a procedure to alleviate suffering, cervical dislocation was performed following euthanasia by CO2 inhalation. As a humane endpoint, euthanasia was carried out if a weight loss of more than 20% was observed within 7 days. Additionally, this study complies with the ARRIVE guidelines, ensuring rigorous and ethical reporting of all animal research procedures.
AOM-DSS-induced CRC in mice

In this study, AOM/DSS (azoxymethane/dextran sodium sulfate)-induced experimental CRC models were used as previously described. After a 1-week adaptation period, the mice were randomly allocated to three groups: a sham group (n = 8), a P. g-treated group (n = 9), and a P. i-treated group (n = 8). The sample size for each group was determined based on a power analysis to detect a significant difference in tumor development between groups, with an expected effect size of 0.8, a two-sided significance level (alpha) of 0.05, and a desired power of 80%. Previous studies on CRC models in mice have demonstrated similar effect sizes with comparable group sizes, supporting the adequacy of our chosen sample size to ensure statistically meaningful results. The mice in all groups were intraperitoneally injected with AOM (10 mg/kg; Wako, Osaka, Japan) on the first day of the experiment. Then, they were given drinking water containing 2.5% DSS (36,000–50,000 MW, MP Biomedical) for 1 week, followed by DSS-free drinking water for 2 weeks. The DSS treatment cycle was repeated three times. The body weights of the mice were monitored daily. At 9 weeks, the mice were sacrificed for sample collection. Their whole intestines were immediately removed and opened longitudinally. The number of visible tumors on the colorectal surface was counted with the naked eye. For microbiota analysis in mice, LAM was derived from fecal samples, and MAM was obtained by swabbing the polyps and surrounding mucosal areas.
Bacterial cultures and pathogen administration

Two putative periodontal pathogens, namely, P. gingivalis strain W83 and P. intermedia strain ATCC 25611, were cultured as previously described. They were cultivated in modified Gifu Anaerobic Medium Broth (Nissui, Tokyo, Japan) using AnaeroPack™ (Mitsubishi Gas Chemical Co., Inc., Tokyo, Japan) and stored in an anaerobic jar (Becton Dickinson Microbiology System, Cockeysville, MD, USA) at 37°C. By generating a standard curve relating plate-spread colony-forming units (CFU) to optical density (OD) at 600 nm, the OD measurement was used to estimate the number of cultured bacteria. After the number of bacteria was calibrated by adjusting the OD value, the bacterial suspension was centrifuged at 3,000 rpm for 20 min. Then, the supernatant was discarded, and the bacterial pellets were suspended in phosphate-buffered saline (PBS) with 2% carboxymethyl cellulose for in vivo experiments. Subsequently, 100 µL of the suspension at a concentration of 10⁹ CFU/mL of live bacteria was orally administered three times a week throughout the experimental period according to the protocol. The sham group was administered the same volume of vehicle as a control. Mice with severe weight loss exceeding 20% of baseline body weight during the study, or any signs of distress that could not be alleviated by standard procedures, were excluded from the study.
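As a rough illustration of the OD600-to-CFU calibration described above, the sketch below fits a linear standard curve and back-calculates the adjustment needed to reach the 10⁹ CFU/mL dosing suspension (100 µL of which corresponds to about 10⁸ CFU per gavage). The calibration points and measured OD are hypothetical; an actual curve would come from plate counts of the specific strain and growth conditions.

```python
# Hypothetical OD600 -> CFU/mL standard curve and dose calculation (illustrative only).
import numpy as np

# Example calibration data: OD600 readings and matched plate counts (CFU/mL)
od_points  = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
cfu_points = np.array([2.1e8, 4.0e8, 6.2e8, 7.9e8, 1.0e9])

slope, intercept = np.polyfit(od_points, cfu_points, 1)   # linear fit to the calibration points

measured_od = 0.55                                         # OD of the harvested culture
estimated_cfu_per_ml = slope * measured_od + intercept

target_cfu_per_ml = 1e9                                    # dosing suspension used in this study
dose_volume_ml = 0.1                                       # 100 µL per oral gavage
print(f"estimated culture density: {estimated_cfu_per_ml:.2e} CFU/mL")
print(f"concentration factor needed: {target_cfu_per_ml / estimated_cfu_per_ml:.2f}x")
print(f"CFU per dose at target: {target_cfu_per_ml * dose_volume_ml:.1e}")  # ~1e8 CFU
```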
Histological staining of intestinal tissues

For histochemical staining, the harvested intestinal tissues were rinsed with PBS, fixed overnight in 10% phosphate-buffered formalin solution, and then embedded in paraffin. The paraffin sections were cut and dewaxed before being stained with hematoxylin and eosin. For immunohistochemistry, the sections were incubated with anti-β-catenin antibody (1:800, rabbit polyclonal, Proteintech, Rosemont, IL, USA) and anti-PCNA antibody (1:800, rabbit polyclonal, Proteintech) overnight. Immunoreactivity was detected using the ImmPRESS® HRP Horse Anti-Rabbit IgG Polymer Kit (Vector Laboratories, Inc., Burlingame, CA, USA), a secondary antibody-labeled polymer method. Counterstaining was performed using hematoxylin. Subsequently, the sections were imaged via microscopy (Biozero BZ-8000; Keyence Corporation, Osaka, Japan) and quantified using the ImageJ software.
Total DNA extraction and sequencing of the 16S rRNA gene

Bacterial DNA from the mouse samples was extracted using the DNeasy Blood and Tissue Kit (Qiagen, Venlo, Netherlands) according to the manufacturer's protocol. The extracted DNA was amplified using the bacterial 16S rDNA PCR Kit (Takara Bio Inc., Shiga, Japan). After specific amplification was confirmed via agarose gel electrophoresis, the bacterial 16S rDNA band was excised from the gel and purified using the QIAquick PCR Purification Kit (Qiagen) according to the manufacturer's protocol. The gel-purified DNA was quantified using a NanoDrop (Thermo Scientific, Waltham, MA, USA). Bioengineering Lab. Co., Ltd. (Kanagawa, Japan) performed the 16S ribosomal RNA gene sequencing. Briefly, the amplicon sequence library was prepared via two-step tailed PCR. To amplify both the V3 and V4 regions of the 16S ribosomal RNA gene, the first PCR was conducted using the following primers: forward primer 5′-ACACTCTTTCCCTACACGACGCTCTTCCGATCT-NNNNN-CCTACGGGNGGCWGCAG; reverse primer 5′-GTGACTGGAGTTCAGACGTGTGCTCTTCCGATCT-NNNNN-GACTACHVGGGTATCTAATCC. The thermal conditions were 94°C for 2 min, followed by 98°C for 10 s, 55°C for 30 s, and 78°C for 30 s, with a final extension at 68°C for 7 min. Purification was performed using AMPure XP (Beckman Coulter, Brea, CA, USA), and the primers were removed. To attach the adapter sequences and unique dual indices required for library preparation, the second PCR was performed using the following primers: forward primer 5′-AATGATACGGCGACCACCGAGATCTACAC TATAGCCTTCGTCGGCAGCGTC-3′; reverse primer 5′-CAAGCAGAAGACGGCATACGAGAT CTAGTACG GTCTCGTGGGCTCGG-3′. The thermal conditions were 94°C for 2 min, followed by 94°C for 30 s, 60°C for 30 s, and 72°C for 30 s, with a final extension at 72°C for 5 min. The indexed libraries were cleaned and analyzed using the Fragment Analyzer system and the dsDNA 915 Reagent Kit (Advanced Analytical Technologies, Ames, IA, USA). The prepared libraries were used for paired-end sequencing with MiSeq v3 reagents and 2 × 300-bp reads on a MiSeq (Illumina, San Diego, CA, USA).
Microbiome analysis

Sequence data were processed using the QIIME2 platform (ver. 2022.8) for microbiome analysis. After being denoised using the QIIME2 DADA2 plugin, the sequences were resolved into amplicon sequence variants (ASVs). The ASVs were assigned to the database using the feature-classifier plugin, and operational taxonomic units were defined based on 97% similarity clustering using QIIME2 with default parameters. Bacterial taxonomy assignment was performed using the Greengenes (ver. 13_8) database. α- and β-diversities (microbial diversities within and between samples, respectively) were analyzed using the QIIME2 diversity plugin with default parameters. The relative abundance of the microbiota composition at the phylum and family levels of each sample was calculated and visualized. Linear discriminant analysis effect size (LEfSe) analysis was conducted using the LEfSe package (ver. 1.0.8). Specifically, the nonparametric factorial Kruskal–Wallis and Wilcoxon rank-sum tests were employed to identify differences, and linear discriminant analysis (LDA) was further conducted to evaluate the microbial effect size for each group.
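The actual processing was done inside QIIME2; purely to illustrate what the α- and β-diversity summaries represent, the sketch below computes a Shannon index (α-diversity) and a Bray–Curtis dissimilarity (one common β-diversity metric) from a toy ASV count table with NumPy. The counts are made up, and UniFrac, which this study used for β-diversity, additionally requires a phylogenetic tree and is not reproduced here.

```python
# Toy illustration of alpha- and beta-diversity calculations (not the QIIME2 pipeline).
import numpy as np

# Hypothetical ASV count table: rows = samples (e.g., a LAM and a MAM sample), columns = ASVs
counts = np.array([
    [120,  30,  5, 45,  0],   # sample A
    [ 10, 200, 80,  5, 60],   # sample B
], dtype=float)

def shannon(x):
    """Shannon index H' = -sum(p * ln p) over non-zero relative abundances."""
    p = x / x.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def bray_curtis(x, y):
    """Bray-Curtis dissimilarity between two count vectors."""
    return np.abs(x - y).sum() / (x + y).sum()

alpha = [shannon(row) for row in counts]
beta = bray_curtis(counts[0], counts[1])
print("Shannon (per sample):", np.round(alpha, 3))
print("Bray-Curtis (A vs B):", round(beta, 3))
```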
Conventional PCR and gel electrophoresis

In this study, conventional PCR was performed using the Veriti PCR System (Applied Biosystems, Carlsbad, CA, USA) to detect the bacteria in the mouse samples. Amplification was conducted with predenaturation at 95°C for 30 s, followed by 40 cycles of 95°C for 10 s and 60°C for 30 s, using specific primers for P. gingivalis (forward primer 5'-AGGCAGCTTGCCATACTGCG-3' and reverse primer 5'-ACTGTTAGCAACTACCGATGT-3') and P. intermedia (forward primer 5'-CGTGGACCAAAGATTCATCGGTGGA-3' and reverse primer 5'-CCGCTTTACTCCCCAACAAA-3'). PCR products were run on 1.5% agarose gels and visualized using SYBR® Safe DNA stain (Invitrogen Corporation, Carlsbad, CA, USA).
Human specimen sampling

This study was conducted in accordance with the principles of the Declaration of Helsinki, received approval from the Ethics Committee of Niigata University Medical and Dental Hospital (Approval No: 2021-0229), and was conducted over an experimental period from January 17, 2022, to March 31, 2023. Informed consent was obtained in writing from all subjects involved in the study. Written consent was also obtained from the patients for publication of this paper. Additionally, this study complies with the STROBE guidelines, ensuring transparency and comprehensive detail in the reporting of our observational research. A total of 20 patients who underwent endoscopic mucosal resection of colorectal lesions at the Division of Gastroenterology and Hepatology, Graduate School of Medical and Dental Sciences, Niigata University, were enrolled in this study. Inclusion criteria included patients aged 20 years or older, with confirmed colorectal lesions suitable for endoscopic mucosal resection, and with no history of systemic inflammatory diseases. Oral samples (saliva and subgingival dental plaque) and intestinal samples (feces and swabs of the intestinal mucosa) were collected. The exclusion criterion was consumption of antibiotics within 3 months before the study initiation. For saliva collection, the patients were instructed to spit into a sterile Falcon tube for 5 min. Two sterile paper points were inserted into the gingival sulcus for 10 s to collect subgingival dental plaque samples. The MAM was obtained by swabbing the surface of the polyp. All samples were immediately stored at −80°C after collection. The clinical parameters and characteristics of the study participants are shown in .
DNA extraction from human clinical specimens

Samples suspended in preservation solution were transferred to EZ-Beads (Promega, Madison, WI, USA) and homogenized for 3 min. Then, the supernatant was boiled for 5 min. DNA was extracted using GeneFind V2 (Beckman Coulter) according to the manufacturer's protocol.
Bacterial detection using real-time PCR

For the human clinical specimens, real-time PCR was performed with the extracted DNA at a final volume of 20 μL per reaction using PowerUp™ SYBR™ Green Master Mix (Applied Biosystems) on the QuantStudio 1 PCR system (Applied Biosystems). Amplification was conducted with predenaturation at 95°C for 30 s, followed by 40 cycles of 95°C for 10 s and 60°C for 30 s, using specific primers for P. gingivalis (forward primer 5'-AGGCAGCTTGCCATACTGCG-3' and reverse primer 5'-ACTGTTAGCAACTACCGATGT-3') and P. intermedia (forward primer 5'-CGTGGACCAAAGATTCATCGGTGGA-3' and reverse primer 5'-CCGCTTTACTCCCCAACAAA-3'). In this study, a cutoff Ct of < 38 was considered to indicate positive bacterial detection in clinical specimens, with reference to a previous publication.
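The detection rule above is simple enough to express directly: a sample is scored positive for a target when amplification occurs with a Ct value below 38. The sketch below applies that cutoff to a hypothetical set of Ct readings; the sample names and values are illustrative only.

```python
# Illustrative application of the Ct < 38 detection cutoff described above.
CT_CUTOFF = 38.0

# Hypothetical Ct values for one target by sample type (None = no amplification)
ct_values = {
    "saliva": 31.2,
    "subgingival_plaque": 28.9,
    "feces_LAM": 39.5,          # above cutoff -> negative
    "mucosal_swab_MAM": 35.7,
    "no_template_control": None,
}

def is_positive(ct, cutoff=CT_CUTOFF):
    """Call a target detected when amplification occurred and Ct < cutoff."""
    return ct is not None and ct < cutoff

for sample, ct in ct_values.items():
    print(f"{sample}: Ct={ct} -> {'positive' if is_positive(ct) else 'negative'}")
```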
Cell culture

The human intestinal epithelial cell line Caco-2 was provided by the RIKEN BioResource Center (Tsukuba, Japan). The Caco-2 cells were cultured in Dulbecco's Modified Eagle Medium supplemented with 10% fetal bovine serum, 100 U/mL penicillin, and 100 µg/mL streptomycin.
Bacterial adhesion assay

An adhesion assay was conducted following a standard bacterial adhesion protocol. Briefly, the Caco-2 cells were cultured to subconfluence and then incubated with the indicated bacterial suspension at a multiplicity of infection (MOI) of 100 for 2 h. OD measurement was performed to estimate and calibrate the number of bacteria for the experiment. The bacteria-treated plate was washed three times with cold PBS, and the cells were detached from the plate using trypsin-EDTA. To determine the number of CFUs of adherent bacteria, serial dilutions of the lysates were plated on blood agar plates (Becton, Dickinson and Company, Franklin Lakes, NJ, USA).
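Two small calculations underlie this assay: the inoculum needed to reach an MOI of 100, and the back-calculation of adherent CFU from colony counts on the dilution plates. The sketch below works through both with hypothetical numbers; the cell counts, colony counts, and volumes are illustrative and are not the study's values.

```python
# Illustrative MOI and CFU back-calculations for a bacterial adhesion assay.

# 1) Inoculum for a given MOI: bacteria added = MOI x number of epithelial cells
cells_per_well = 2e5            # hypothetical Caco-2 count at subconfluence
moi = 100
inoculum_cfu = moi * cells_per_well            # 2e7 CFU per well

# 2) Adherent CFU recovered from serial-dilution plating
colonies_counted = 42            # colonies on the counted plate (hypothetical)
dilution_factor = 1e3            # plate came from a 10^-3 dilution of the lysate
plated_volume_ml = 0.1           # volume spread on the blood agar plate
lysate_volume_ml = 1.0           # total lysate per well

cfu_per_ml = colonies_counted * dilution_factor / plated_volume_ml
adherent_cfu = cfu_per_ml * lysate_volume_ml

percent_adhesion = 100 * adherent_cfu / inoculum_cfu
print(f"inoculum: {inoculum_cfu:.1e} CFU, adherent: {adherent_cfu:.1e} CFU "
      f"({percent_adhesion:.2f}% of inoculum)")
```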
Statistical analysis

All data were expressed as mean ± standard deviation. Statistical analyses were conducted using GraphPad Prism (GraphPad Software, San Diego, CA, USA). The Mann–Whitney U test was employed for two-group comparisons, whereas one-way analysis of variance with Tukey's post hoc test was used for multiple-group comparisons. Fisher's exact test was employed for the clinical samples. A P-value < 0.05 was considered statistically significant.
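As an illustrative Python analogue of the multiple-group comparison described above (the study itself used GraphPad Prism), the sketch below runs a one-way ANOVA followed by Tukey's post hoc test on a mock three-group dataset, such as tumor counts in sham, P. g-, and P. i-treated groups. The numbers are simulated, not study data.

```python
# Illustrative one-way ANOVA with Tukey post hoc test (actual analysis: GraphPad Prism).
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
sham = rng.poisson(5, 8)     # mock tumor counts, n = 8
pg   = rng.poisson(9, 9)     # n = 9
pi_  = rng.poisson(5, 8)     # n = 8

# One-way ANOVA across the three groups
f_stat, p_anova = stats.f_oneway(sham, pg, pi_)

# Tukey's HSD for pairwise comparisons
values = np.concatenate([sham, pg, pi_])
groups = ["sham"] * len(sham) + ["P.g"] * len(pg) + ["P.i"] * len(pi_)
tukey = pairwise_tukeyhsd(values, groups, alpha=0.05)

print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3f}")
print(tukey.summary())
```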
Oral administration of P. gingivalis aggravates colitis-associated colorectal cancer in experimental mouse models

To investigate the carcinogenic effect of periodontal pathogens, P. gingivalis and P. intermedia were orally administered to AOM-DSS-induced CRC mouse models ( ). The body weights of the mice were monitored throughout the experimental period; temporary weight loss due to AOM and DSS administration was observed, but there were no differences in weight change between the groups ( ). Morphologically, polyps were found on the mucosal surface of the colon and were more abundant in the P. g-treated group than in the sham and P. i-treated groups ( ). Furthermore, histological evaluation revealed that the numbers of cells positive for β-catenin and PCNA, both colorectal carcinogenesis-related proteins, in the intestinal epithelium were significantly higher in the P. g-treated group than in the sham and P. i-treated groups ( , ). Taken together, these findings indicate that oral administration of P. gingivalis has carcinogenic potency in experimental AOM-DSS-induced CRC in mice.
Microbiome analysis revealed distinct profiles of the LAM and MAM, with possible enrichment of orally administered P. gingivalis in the gut

As alterations in the gut microbiota are associated with CRC, we conducted microbiome analysis with a focus on the LAM and MAM. In the α-diversity analysis, each diversity index exhibited no significant differences among the groups ( ). In the β-diversity analysis, the PCoA plot using unweighted UniFrac distance showed a tendency for the LAM and MAM to segregate ( ). Similarly, a distinct tendency in bacterial composition was observed between the LAM and MAM at the phylum and family levels ( , ). Subsequently, we conducted LEfSe analysis to further characterize the taxonomic differences among the groups; the results showed that the P. g-treated groups exhibited a significantly higher relative abundance of the family Porphyromonadaceae, to which P. gingivalis belongs, compared to the sham group ( ). No differences were observed in the abundance of the family Prevotellaceae, to which P. intermedia belongs, between the sham and P. i-treated groups. The relative abundance of Porphyromonadaceae in both the LAM and MAM was increased by the P. gingivalis treatment, indicating possible enrichment of orally administered P. gingivalis in the gut.
P. gingivalis exhibited higher presence in the MAM than in the LAM

To confirm the enrichment of the orally administered bacteria in the MAM and LAM, we conducted PCR analysis using specific primers for P. gingivalis and P. intermedia. The conventional PCR analysis of the mouse samples revealed that P. gingivalis was present in both the LAM and MAM and was detected more frequently in the latter than in the former (MAM: 6 out of 9 samples; LAM: 2 out of 9 samples) ( ). An opposite trend was observed for P. intermedia, which was detected more frequently in the LAM than in the MAM (LAM: 5 out of 8 samples; MAM: 3 out of 8 samples) ( ). We also performed similar quantification using clinical samples obtained from human patients with colonic polyps. Both P. gingivalis and P. intermedia were detectable in all types of samples, namely saliva, subgingival dental plaque, feces (referred to as LAM), and swabs of the intestinal mucosa (referred to as MAM), via real-time PCR ( ). Interestingly, P. gingivalis was found to be more abundant in the MAM than in the LAM ( ), indicating extensive enrichment of P. gingivalis on the surface of the intestinal mucosa.
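Detection frequencies like the mouse counts above form a 2×2 table (compartment × detected/not detected), which is the kind of comparison Fisher's exact test, used for the clinical samples in this study, is designed for. The sketch below simply shows how such a table would be tested; it is illustrative and makes no claim about the significance of these particular counts.

```python
# Illustrative 2x2 comparison of detection frequencies with Fisher's exact test.
from scipy.stats import fisher_exact

#                 detected  not detected
# MAM (n = 9):        6           3
# LAM (n = 9):        2           7
table = [[6, 3],
         [2, 7]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, two-sided p = {p_value:.3f}")
```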
P. gingivalis showed higher adhesion capacity to intestinal epithelial cells than P. intermedia Subsequently, we explored the interaction of the periodontal pathogens enriched near the intestinal mucosal surface with the host epithelial cells. An in vitro adhesion assay using human intestinal epithelial cells clearly revealed the higher adhesion capacity of P. gingivalis to the cells compared with P. intermedia ( ). Furthermore, the analysis of different strains of P. gingivalis revealed the strain-dependent adhesive capacity to the intestinal epithelial cells ( ). Moreover, we observed that the pretreatment of P. gingivalis with specific inhibitors of gingipain, a major virulence factor produced by P. gingivalis , significantly decreased its adhesion capacity to the cells ( ). P. gingivalis exhibited significantly higher adhesion ability compared to intestinal and oral commensal bacteria ( ).
This study demonstrates that ingested P. gingivalis , a putative periodontal pathogen, aggravates CRC in an AOM/DSS mouse model. Our animal study and clinical examination revealed that P. gingivalis was more abundant in the MAM than in the LAM. Furthermore, our in vitro study showed that P. gingivalis exhibited higher adhesion capacity to intestinal epithelial cells than P. intermedia . These results indicate that P. gingivalis is involved in the pathogenesis of CRC owing to its high MAM translocation and high adherence to the intestinal epithelium. In this study, AOM/DSS-induced mouse models were used to investigate CRC ( ). Various CRC mouse models, including genetically derived and chemically induced models, are available for CRC research. Genetically engineered Apc Min/+ mice exhibit mutation at the adenomatous polyposis coli (Apc) tumor suppressor gene and shows consequent predisposition to multiple intestinal neoplasia . Wang et al. recently reported that oral gavage of P. gingivalis increased the tumor count and volume in the Apc Min/+ mouse model . In addition, they demonstrated that P. gingivalis promoted CRC via NLRP3 inflammasome activation in vitro and in vivo . Among the chemically induced CRC models, the combination with the AOM and DSS is the most common method and representative and robust polyp induction is observed . AOM is a colonic epithelial cell-specific carcinogen that causes the formation of preneoplastic lesions by DNA damage. DSS is a chemical that induces colonic inflammation, which promotes the development of the initiated neoplastic cells into colorectal tumors. Recent studies using the AOM-DSS model have demonstrated that P. gingivalis administration exacerbates colorectal cancer severity by modulating specialized immune cells in the intestine, such as invariant natural killer T cells . Previously, we have reported that P. gingivalis administration worsens colitis severity in a DSS-induced model . Taken together, these findings suggest that the synergistic effects of P. gingivalis -mediated inflammation and immune modulation may contribute to the progression of colorectal cancer. Our microbiota analysis using mice samples revealed that the LAM and MAM exhibited distinct bacterial profiles, which is consistent with previous studies ( – ) [ , , ]. Miyauchi et al. conducted a comparative analysis between LAM and MAM in healthy volunteers and found clear differences in both UniFrac and principal coordinate analysis . Clavenna et al. recently reported the definite differences in the bacterial composition and diversity between LAM and MAM in patients with colonic polyp . Moreover, they found distinct signatures of MAM between low- and high-grade dysplastic colon polyps, indicating alterations of MAM with a tumor-stage-specific manner. Considering the biogeographic adjacency of MAM to the intestinal epithelium, MAM has strong impacts on the initiation and development of colonic tumors. Therefore, MAM could be a potential diagnostic and therapeutic target of CRC. In addition to the distinction between MAM and LAM, we examined the enrichment of P. gingivalis in MAM using PCR methods in both mice and human samples ( and ). This could be explained by the high translocation of P. gingivalis to intestinal mucosal surfaces owing to its specific virulence factors. The intestinal epithelium is covered by a thick mucus layer that functions as an intestinal barrier separating lumen bacteria from the intestinal mucosa. 
The mucus layer is composed of highly glycosylated mucin proteins that form a gel-like structure overlying the intestinal epithelium. Mucin 2 (MUC2) synthesized by goblet cells is the most abundant mucin protein in the small and large intestines . Gingipains, a group of complex arginine- or lysine-specific cysteine proteinases, have been recognized as a major virulence factor of P. gingivalis . A previous study reported that RgpB, a type of gingipains secreted by P. gingivalis , exhibited the ability to cleave MUC2 at a specific site, resulting in the disruption of the MUC2 polymeric framework . One of the factors contributing to the intestinal barrier is antimicrobial peptides, which prevent microorganisms from reaching the intestinal mucosa. Through mass spectrometry analysis, Carlisle et al. found that P. gingivalis culture supernatants fully or partially degrade human α- and β-defensins, which are major antimicrobial peptides secreted within the intestinal mucosa . In addition, Maisetta et al. reported the degradation of human β-defensin by gingipains and its subsequent reduction in vitro . Taken together, these observations suggest that the manipulation of the intestinal mucosal barrier by P. gingivalis allows its translocation and enrichment to the mucosal surface. In our in vitro study, we found that P. gingivalis exhibited higher adhesion capacity to intestinal epithelial cells than other periodontal pathogens ( ). Furthermore, we exhibited the strain- and gingipain-dependent adhesive capacities of P. gingivalis to the cells. Bacterial adherence/invasion to the host cell is an important initial phase for the successful establishment of infection and subsequent cellular response . The high adhesion/invasion capacities of P. gingivalis to oral epithelial cells have been extensively reported, with particular emphasis on the crucial roles of fimbriae and gingipains in the adhesion/invasion processes [ – ]. P. gingivalis fimbriae are classified into six types (types I to V and Ib) based on the fimA genes encoding FimA (a subunit of fimbriae), and the fimbria variations exhibit distinct adhesion properties . We believe that the differences in adhesion profiles observed in this study were due to the strain-specific variations in the fimbria types. The involvement of gingipains in epithelial cell adhesion has also been reported. Chen et al. demonstrated that gingipains possess a catalytic domain and hemagglutinin/adhesin domains, which contribute to the adherence of P. gingivalis to gingival epithelial cells . Moreover, Onoe et al. reported that the proteolytic function of gingipains is responsible for the maturation of fimbriae formation . These findings likely explain the reduction of cell adhesion capacity by the gingipain-specific inhibitors in this study. Considering the recent publication reporting the presence of P. gingivalis in human intestinal tissues , further investigations are warranted to determine the impact of fimbriae and gingipains on intestinal epithelial adhesion and the subsequent cancer phenotypes. This study has several limitations that need further investigation. First, this study did not include F. nucleatum as a comparison, as its association with CRC has already been well-documented in numerous studies. Rather than reiterating findings on F. nucleatum , we aimed to expand the focus to other periodontal pathogens, such as P. gingivalis and P. intermedia , to explore their potential role in CRC pathogenesis. However, including F. 
nucleatum as a positive control could have provided a useful comparative baseline. Second, the microbiota analysis in this study was limited to short-read sequencing, which did not achieve species-level identification. Therefore, we used qPCR to achieve species-level analysis for the two species of interest. A comprehensive metagenomic analysis would be necessary to provide broader species-level insights. Third, more comprehensive clinical studies are needed. In this study, the analysis was limited to patients with polyps, and future research should aim for a more extensive and large-scale approach. To obtain clearer insights, comparisons should include both healthy individuals and CRC patients, as well as evaluating variations across different CRC severity levels. Comparing bacterial enrichment in MAM between patients with and without polyps, as well as examining the effects of interventions such as periodontal treatment and oral care on the MAM flora, will deepen our understanding of the oral-gut connection in periodontal medicine. This understanding will provide new insights into the importance of maintaining healthy oral hygiene and the exploration of potential therapeutic interventions targeting the oral–gut microbial axis.
P. gingivalis is enriched in MAM, and its subsequent adhesion to intestinal epithelial cells is potentially involved in the pathogenesis of CRC.
S1_RAW_images: The original uncropped gel images (PDF).
An active electronic, high-density epidural paddle array for chronic spinal cord neuromodulation | 0aa840e0-88c8-4a56-8aa2-1615cfd5c4ca | 11920892 | Musculoskeletal System[mh] | Introduction Epidural electrical stimulation (EES) of the spinal cord has been used for the treatment of neuropathic pain for almost six decades (Nahm ), with over 50 000 procedures performed in the United States each year (Krog et al ). In therapeutic or research applications, EES leads are surgically inserted into the epidural potential space on the dorsal aspect of the spinal cord and connected to an implantable or external pulse generator (IPG or EPG, respectively) using inline contacts. EES has been used in preclinical research settings to study central, peripheral, autonomic, and sensorimotor function (Musienko et al , Parker et al , Gad et al , Capogrosso et al , Calvert et al ), as well as in clinical research settings to restore sensorimotor function following a spinal cord injury or amputation (Chandrasekaran et al , Rowald et al , Lorach et al , Nanivadekar et al ). In parallel, epidural spinal recordings obtained through EES leads have been used as control signals for pain therapies (Nijhuis et al ), to study voluntary movement control (Burke et al ), and to examine somatosensory evoked spinal potentials (SEPs) (Nainzadeh et al , Urasaki et al , ÇiÇek et al , Tsirikos et al , Insola et al , Sala et al , Woodington et al ). However, all prior research studies and clinical applications utilize passive leads, requiring a one-to-one connection between electrical contacts on the tissue and the stimulation electronics. In practice, this has put a constraint on the number of contacts used to a maximum of 32 channels, resulting in limited selectivity during stimulation and resolution during recording. To ensure proper placement of the EES electrodes for activation of targeted spinal circuits, a key step is intraoperative testing during implantation (Falowski ). During intraoperative testing, electrical stimuli are delivered through the implanted paddle. The stimuli primarily activate sensory neurons, which then recruit motor neurons via reflex pathways (Capogrosso et al ). Using bilateral electromyography (EMG), the response of muscles at the same spinal level as the sensory targets is evaluated, and lead position is adjusted to achieve the desired placement (Shils and Arle ). To recruit more specific pools of neurons on the spinal cord, multipolar stimulation is applied in a current steering approach (Chandrasekaran et al , Rowald et al , Lorach et al , Mishra et al , Nanivadekar et al ), which requires access to multiple contacts in a confined region of interest. Subsequently, there have been efforts to design an EES paddle optimized for particular applications (for example, locomotor restoration), using neuroimaging data and computational modeling to guide the arrangements of contacts on the paddle (Rowald et al ). Additionally, standard EES paddles principally deployed for the management of neuropathic pain concentrate their electrodes on the central structures of the spinal cord, and do not place stimulating contacts over lateral regions (for example, spinal nerve entry points). Activation of lateral structures has resulted in strong motor recruitment of target muscles (Calvert et al ), which may be more selective than stimulation more proximal to contralateral motor pools. 
However, the distribution of afferent entry points is not uniform along the rostrocaudal axis, again highlighting the importance of proper intraoperative alignment when using sparsely distributed electrode contacts. The necessity for manual stimulation parameter selection is a major barrier to the widespread adoption of EES for functional improvement after spinal cord injury (Solinsky et al ). Consequently, there have been efforts to automate the optimization of stimulation parameters using machine learning models (Zhao et al , Govindarajan et al ). Although these approaches have shown improvement in speed, prior models were conditioned on each electrode independently. Such an approach is effective for paddles containing low numbers of contacts but requires significant computing power to run many models in parallel for high-count paddles. Additionally, conditioning on each electrode independently does not allow the model to infer what may happen during stimulation on electrodes excluded from the training set. As the number of stimulation contacts in a paddle increases, the time to collect training data scales linearly. Instead, if neural networks can infer the consequences of stimulation at stimulation contacts not included in the training dataset, collecting training data from all contacts may not be necessary to rapidly program EES on many-contact paddles. To address both placement limitations and stimulation parameter search optimization, a next-generation EES paddle must (1) span multiple spinal segments to target multiple sensorimotor pools, (2) provide a wide mediolateral span to target lateral structures, (3) enable localized current steering and bipolar re-referencing with densely packed electrodes, and (4) be reconfigurable for patient anatomy. These requirements are incompatible with conventional EES hardware with limited electrode contacts. Thus, we designed a smart-implant called HD64 to provide 60 electrodes of epidural current steering for EES across a 14.5 mm mediolateral span (almost 2x wider than commercial EES paddles) and 2.5 vertebral segment span (40 mm). Our smart-implant is only 2 mm thick, and sports a high-density electrode array integrated with a hermetic electronic multiplexing package and a 24:64 reconfigurable multiplexer application-specific integrated circuit (ASIC)—thus breaking the one-to-one wiring constraint of percutaneous EES hardware. The programmable smart-paddle enables the spatial layout of stimulating electrodes to be software-controlled dynamically, and is powered by a ±5 V AC power driver ASIC with fail-safe AC power leakage-detection circuits. The leakage detection circuit acts as a ground-fault detection system, powering down the ASIC if power flows to ground through an alternative path (and maintaining normal operation when power returns down the dedicated AGND and DGND return lines). This ensures the power required for the operation of the active implant remains isolated from the user of the HD64. For future human use of the device, we developed good manufacturing practice (GMP) processes (achieving a manufacturing yield of 85%) and performed ISO 14708 aging testing as well as ISO 10993-1:2021 biocompatibility testing. A comparison between three commercial EES paddles with the work presented here can be found in table . The culmination of this work resulted in the long-term evaluation of the smart HD64 paddle for EES using benchtop verification and in-vivo validation in two sheep with up to 2 paddles per animal for 15 months. 
During this time, our goals were to establish the utility of HD64 for performing high-density EES studies and to observe any device-related functional issues. We quantified the selectivity of EES-evoked motor responses delivered using dense bipoles, characterized differences in SEPs recorded using high- and low-density bipoles, and extended state-of-the-art stimulation parameter inference machine learning models to high-density electrodes (Govindarajan et al ).
Methods 2.1. Treatment span of sensory-motor EES In the T11 to L1 vertebral regions, the transverse diameter of the human spinal cord is 8–9.6 mm (Ko et al , Fradet et al ), the spinal canal sagittal depth ranges between 15.4 and 19.54 mm, and the spinal canal transverse width spans 16.7–26.5 mm (Laporte et al , Busscher et al ). Recent work has highlighted the benefit of activation of lateral structures (Calvert et al ), and their potential as a target for locomotor neuroprosthesis following spinal cord injury (Rowald et al ). Conventional surgical paddles for pain are limited to dorsal column medio-lateral treatment (<7.5 mm with 4 electrode columns) across two vertebral segments (<49 mm with 8 electrodes rows). Extending the medio-lateral therapeutic span of EES delivery will facilitate accessing the dorsal horn and dorsal roots. Based on the desired treatment area, the paddle requirements are to treat a 13.7 × 43 mm stimulation area. Extending the treatment surface area for EES faces two challenges: (1) the lateral epidural potential volume adjacent to nerve roots is much thinner than at the midline, and (2) performing EES with a sparser electrode array would reduce the targeting resolution, potentially introducing off-target effects or non-specific activation. 2.1.1. Mechanical requirements for surgical introduction of the lead The paddle geometry was designed to mimic conventional EES paddles at the anatomical midline using a 2 mm thick geometry. Due to the 1 mm lateral epidural thickness, the paddle lateral edges were limited to 0.7 mm. Ten cross-sectional geometries of electrode profiles were developed in groups of three and sequentially evaluated. Each design was modeled using computer-aided design software and molded in silicone to exhibit various longitudinal and lateral flexibility profiles. The profile groups were sent to three neurosurgeons to evaluate the preferred paddle design (criteria for their evaluation included steerability, axial stiffness, and bendability) while accessing the epidural space through a small laminectomy. The resulting profile (figure (l)) was the final profile for two key reasons: (1) the lateral wings remained flexible and could bend through a conventional-width laminectomy, and (2) a relatively rigid paddle was strongly preferred as it enabled physicians to advance the paddle rostrally using forceps. 2.1.2. Smart-implant multiplexing architecture Using a fixed number of electrodes (e.g. 16 or 32) spread across a wider medio-lateral span would result in a sparser electrode density and result in a reduced EES targeting resolution. Given the lateral nerve root density, a sparse density was undesirable. Conventional implanted surgical paddles have reached 32 electrodes, but do not exhibit stimulation sites over the nerve roots and require management and connection of 4 lead tails below the skin. Scaling to 64 electrodes using current connector technology would require 8 lead-tails and 64-channel connector on the IPG, which would greatly increase bulk in the body and surgical complexity. Since EES generally needs access to many electrodes but only a subset are used at the same time, we developed a multiplexing architecture to programmably connect electrodes to the pulse generator. Specifically, we sought to develop a scalable manufacturing process to produce paddle electrodes. 2.2. 
Implant-grade design requirements HD64 is a smart-paddle containing embedded electronics that is powered and controlled by a pulse generator over a two-lead tail wiring scheme. To realize an active-multiplexed electrode array, we developed multiple new implant technologies including: (1) a high-voltage multiplexer that operates from charge-balanced AC power (figures (a) and (b)), with known noise performance (Rachinskiy et al ), (2) a high-density and low-cost 93-pin hermetic feedthrough package with >80% yield (figures (c)–(e)), (3) precision electrode technology (platinum–iridium with 100 μ m lines and 100 μ m spaces), and (4) medical-grade micro-bonding processes to permanently bond 85 electrical connections between the hermetic electronic package and electrode array. For long-term implanted use, we developed requirements based on our clinical inputs for people with spinal cord injury and/or chronic pain in the lower extremities. We established electrical and mechanical safety requirements and testing paradigms in accordance with ISO 14708-3:2017. We further developed a biocompatibility evaluation plan in accordance with ISO 10993-1:2018. The device also had to be manufactured according to GMP and undergo ethylene oxide (ETO) sterilization validation. 2.2.1. Electrical safety: AC-powered implanted satellite devices In this work, we describe two specific and critical design elements that enable new smart implants including: (1) ultra-low current satellite multiplexer operation using charge-balanced AC power over an implanted lead and pluggable-connector interface, (2) digital charged-balance programmable control of the distal satellite multiplexer. We tested commercial leads and connectors with ±10 V AC power using an accelerated aging paradigm in a sealed saline container. At the end of a 5 year implanted life equivalent, the saline was tested for residual traces of platinum–iridium using inductively coupled mass spectrometry to test for any electromigration. We determined that ±5 V AC through the lead-wire with a 2x voltage multiplier within the hermetic electronics package prevented any risk of electromigration. Additionally, we developed a separate IPG-side driver ASIC chip to deliver low-current ±5 V AC power with charged-balanced differential data lines to program the multiplexer. Board logic-level programming of a satellite multiplexer over a lead wire is not an acceptable risk, as any DC potential on the line can lead to electromigration at the connector. The driver ASIC was designed to deliver a programmable AC-current limited to 10–100 μ A and 10–500 kHz to power the multiplexing ASIC (note, this is independent of therapeutic stimulation frequency). Importantly, a novel real-time leakage current detection circuit was developed such that any broken wire or connection could be detected by the ASIC which would then generate an internal digital fault flag. 2.2.2. Biocompatibility evaluation For translation to future human implanted use, we determined that the following ISO 10991-1 tests must be performed: cytotoxicity, sensitization, irritation, material-mediated pyrogenicity, acute systemic toxicity, subacute toxicity, implantation, genotoxicity, ethylene oxide residuals, and partial chemical characterization. To achieve these endpoints, approximately 300 active-paddle test articles were sent to a certified ISO 10993-1 test house over a 14 month duration. 
All test articles successfully passed these tests, and no histological abnormalities were identified by the test house following any of the in vivo tests. A summary of the biocompatibility tests performed and their results are included in supplementary table 1. 2.2.3. Precision electrode technology Precision electrodes were developed with a highly-novel delamination-free process using medical-implant grade silicone (Nusil, Carpentaria, CA), platinum–iridium 90/10 conductors, and a nylon mesh reinforcement layer (figure (f)). In this process, the silicone and platinum–iridium are pre-processed using a proprietary laser patterning technique, resulting in 50-100 μ m electrode features in 50 μ m thick platinum–iridium 90/10. A flexible reinforcement mesh layer was embedded within the silicone to prevent stretching of the silicone to protect the electrode traces from fracture during stretching or repetitive pull cycles. The HD64 paddle consists of three silicone and two metal layers for a total substrate thickness of 400 μ m. Using a proprietary process, the silicone layers are chemically fused together to form a seamless and delamination-free construction for long-term implanted operation. 2.2.4. High-voltage 24:64 multiplexing ASIC with AC power and charge-balanced programming An onboard multiplexer ASIC (figure (a)) was developed to support two key functions needed for this architecture: (1) a fail-safe low-voltage and low-current AC powering scheme with on-chip ±9 V compliance voltage multipliers and a bidirectional charge-balanced digital read-write scheme, (2) a 24:64 switch matrix to allow 24 bidirectional wires to connect to any of the 60 high-density electrodes. Fabrication was performed using the X-FAB XH035 18V 350 nm silicon process to construct the ASIC. A polyimide redistribution layer was developed to provide a solder-ball flip-chip interface between the ASIC (figure (b)) and hermetic feedthrough (figure (c)). The multiplexer is designed to be dynamically programmed using a ±1.8 V charge-balanced, differential digital interface operating from ±5 V AC power. The AC power, 2 programming lines, 2 ground lines, and 19 bidirectional contact connections (24 conductors total) are distributed across 2 12-contact lead tails. Once powered, programming can be used to connect any of 24 input wires to 64 outputs. The multiplexer contains 64 blocks, each containing four switches to ensure every output electrode has a redundant connection to at least four stimulation/recording inputs. Switches were assigned to blocks in such a way as to maximize the number of switches that may become nonfunctional while still enabling sequential programming to raster through the entire 60-electrode array. The ASIC operates from AC power (±5 V AC, 10–500 kHz) and features real-time current-limiting. Charge balance is necessary to prevent corrosion and long-term electro-migration of the conductors. The ASIC uses an on-board rectifier with off-chip capacitors to convert the AC signal to ±9 V DC internally and 3.3 V. 2.2.5. High-density hermetic electronics package We developed a custom 93-feedthrough ceramic assembly (8 × 8 × 0.75 mm) brazed into a titanium flange. On the interior of the package, the ASIC was flip-chipped to the surface of the feedthrough array (figure (c)). A low-outgassing underfill was applied between the ASIC and the ceramic and inspected for voids. A moisture getter was applied to the interior titanium lid, which was then seam-welded to form a hermetic assembly. 
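Before continuing with the package assembly, the following Python sketch illustrates the routing flexibility provided by the 24:64 switch matrix described above. The switch-to-electrode assignments used here are random placeholders (the actual ASIC netlist is fixed in silicon and is not reproduced here); the sketch simply models each electrode as reachable through four candidate conductors and uses a standard augmenting-path matching to test whether a requested set of electrodes can be driven on distinct conductors simultaneously.

import random

N_CONDUCTORS, N_ELECTRODES, SWITCHES_PER_ELECTRODE = 24, 64, 4

# Hypothetical switch map: electrode index -> 4 candidate conductor indices.
rng = random.Random(0)
switch_map = {e: rng.sample(range(N_CONDUCTORS), SWITCHES_PER_ELECTRODE)
              for e in range(N_ELECTRODES)}

def route(requested_electrodes, switch_map):
    # Assign each requested electrode a distinct conductor (bipartite
    # matching via augmenting paths). Returns electrode -> conductor, or None.
    match = {}  # conductor -> electrode

    def try_assign(electrode, visited):
        for conductor in switch_map[electrode]:
            if conductor in visited:
                continue
            visited.add(conductor)
            if conductor not in match or try_assign(match[conductor], visited):
                match[conductor] = electrode
                return True
        return False

    for e in requested_electrodes:
        if not try_assign(e, set()):
            return None  # this combination cannot be routed simultaneously
    return {e: c for c, e in match.items()}

# Example: route a hypothetical set of eight electrodes at once
print(route(list(range(8)), switch_map))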
Surface mount capacitors were soldered to metal pads on the ceramic feedthrough surface. The lid was attached to the feedthrough array and a laser seam-weld was performed in a nitrogen–helium environment. On the exterior of the hermetic packaging, the feedthroughs emerge from the ceramic surface and serve as a surface for 93 subsequent micro-bonds to be performed with the electrode array (figures (d) and (e)). 2.2.6. Electrode-hermetic electronics integration The completed hermetic electronics assembly and high-density electrode assembly were then micro-bonded using a proprietary process. Rigorous machine vision was used to perform automated inspection of the 93 feedthroughs and receiving electrode pad positions. Fixturing and vision cameras were used to align the components together until the feedthroughs and receiving pads were overlapping with <20 μ m offset (figure (f)). Machine vision was used to inspect the tolerances of the electrode and the feedthrough component, and components with >35 μ m of tolerance were discarded to ensure reliability of the bonding process. After the precision alignment was performed, a custom-made machine was developed to perform thermal welding between each platinum–iridium pad and each platinum–iridium feedthrough. 2.2.7. Injection molding and lead-tail attachment The proximal end of the HD64 smart paddle uses two connector tails, each with 12 in-line ring contacts (figure (h)), which have decades of long-term implant reliability (Letechipia et al ) and are compatible with IPGs. The lead-tails were manufactured and welded to the platinum–iridium receiving pads on the HD64 substrate. A silicone injection molding step was used to create the contoured paddle profile of the HD64, as well as to completely insulate between all of the feedthrough pins. The high-pressure injection molding process with appropriate ports and runners ensured that no air voids were present between feedthrough pins. All fixtures and processes were developed to ensure the silicone flow did not trap bubbles during the assembly process. Following this process, the HD64 was completely assembled and ready for use (figure (h)). 2.3. Benchtop evaluation of the HD64 2.3.1. Mechanical testing of the array body and lead tails To evaluate the performance of the HD64 following the mechanical forces experienced during surgical implantation and in the epidural potential space, a set of mechanical tests was conducted. The tests assessed the electrical performance following flexion of independent sections of the paddle-lead tail assembly (as these sections experience different flexion profiles over their lifetime), and assessed the tensile performance of the entire assembly. To determine the tensile properties of the assembly, 29 HD64 samples (58 lead tails) underwent standardized tensile assessment. All samples were soaked in 0.9 gl −1 ± 10% NaCl solution at 37° ± 2° C. The initial lengths of the sample were measured using a calibrated ruler. The sample under test was then placed into a tensile testing fixture (MTS Systems, Eden Prairie, MN). The array body was placed in the bottom grip, using silicone pneumatic grips, and the proximal end of the lead tail being assessed was secured in the top grip using a serrated fixture. The 5.5 N tensile load setting of the testing fixture was used to achieve a 5 N tensile force (50 mm min −1 velocity), which was held for 60 s.
After testing both lead tails, the sample was removed from the fixture and the final lengths of the samples were measured. Electrical isolation between each conductor and contact site was then assessed. To pass this assessment, lead tails must exhibit permanent elongation less than 5%, and electrical isolation between all tested nets. The mean permanent elongation observed was (3.6 × 10 −4 ± 4.3 × 10 −4 )% (mean ± standard deviation), and all 29 samples measured permanent elongation less than 5%. The distribution of observed permanent elongation is presented in supplementary figure 1(a). Of the 29 samples tested, 1 failed electrical testing (a disconnect between a conductor in the lead tail and the multiplexing ASIC, and two shorts between pairs of lead tail conductors). Such a failure would be easily detectable during system testing prior to implantation of the device, and would simply result in a short delay of surgery while a backup device was located. The low failure rate observed following tensile testing, and the low severity of impact, resulted in an acceptable level of risk. During surgical implantation, sections of the HD64 experience differing flexion forces. Therefore, independent tests were developed to adequately assess the electrical performance of the HD64 assembly after exposure to these forces. Firstly, to simulate the flexion experienced by the array body during implantation and in the epidural potential space, 29 array bodies were flexed using a custom flexion testing fixture to ±22° for 1500 cycles, with a 0.5 s cycle duration. Following the flexion cycles, the electrical isolation between each conductor and contact site was assessed. In all 29 samples tested, all electrical connections remained nominal, with no open or short circuit conditions created. The second aspect to be tested was the most distal end of the lead tail, which is typically coiled and secured to form a strain relief loop. To assess this section of the lead tail, the paddle bodies were held secure, and the distal section of the lead tails was flexed to ±45° for 1500 cycles, with a 0.5 s cycle duration. Following the flexion cycles, the electrical isolation between each conductor and contact site was assessed. In 3 of the 29 tested samples, new short or open circuit conditions were observed, though these were all below the 3 short or open conductor limit afforded by switch matrix redundancy. Finally, the impact of repeated flexion on the resistance of the distal section of the HD64 (simulating the 2 year implanted lifetime required of the device for this study) was assessed. For this test, the distal section of the lead tail was sectioned to be 20 cm in length. With the paddle body secured, the lead tail was flexed to ±90° for 47 000 cycles, with a 0.5 s cycle duration. Following the flexing cycles, the lead tail was cut to separate it from the paddle body, and the 12 conductors in each lead tail were removed from the tail to facilitate resistance testing. The maximum allowable resistance for the 20 cm section is 6 Ω (giving a resistivity of 30 Ωm −1 ). The distribution of observed resistances is shown in supplementary figure 1(b). All measured conductors exhibited a post-flexion resistance below the 6 Ω criterion, indicating minimal impacts on lead resistance due to flexion over the implanted lifetime of the device. The successful completion of mechanical testing indicated a robust design, which continued to hermetic testing. 2.3.2.
Hermeticity of the electronics package To protect the embedded active electronics from the ionic environment in the epidural space, it is necessary to ensure the hermeticity of the electronics package and feedthrough assembly. Hermeticity model calculations were performed using a 2 year shelf life and a 1 year implanted life. Hermetic tests for electronics (MIL-STD-883 Method 1014.15) have failure limits based on the dew point inside a free volume package. The internal free volume of the HD64 assembly was 0.02 cm 3 , which corresponds to a failure leakage rate of 5 × 10 −9 atm cm 3 s −1 of air. This is equivalent to 13.5 × 10 −9 atm cm 3 s −1 of leakage using helium, which is used to test hermetic components. Using the laser interferometer method, compliant with the standard, the leak rate was measured for 171 hermetic packages. Figure (g) shows the distribution of measured leak rate values. Less than 10% of assembled packages had gross leak rates and were sent for subsequent testing. All accepted packages recorded a leak rate at least 15 times lower than the failure threshold set by MIL-STD-883. Based on the empirical leak rates 5 × 10 −10 , hermeticity calculations suggest the internal free volume of the package will not reach a dew point until far beyond a 2 year shelf life and 2 year implanted use. Using our measured leakage values, the dimensions of the device, and the material properties of our moisture-getter, the calculated hermetic lifespan of the hermetic electronics package is approximately 4 years (Greenhouse, Lowry, and Romenesko , 71). Additional validation may be required prior to implanting the HD64 for longer than this calculated lifespan. We were therefore confident that the hermetic package could continue safely to 15 month in vivo testing. 2.3.3. Control of active-multiplexing As all 60 contacts are bidirectional (capable of simultaneous stimulation and recording), maintaining knowledge of the state of the HD64 multiplexer was necessary to demultiplex the recorded spinal responses and stimulation information post-hoc . A schematic representation of the connections and devices used in this manuscript is presented in figure (a). The HD64 is powered and controlled by an external controller (MB-Controller, Micro-Leads Medical, Somerville, MA), which communicates with a host PC over a serial connection. MATLAB (version 2023b, MathWorks, Natick, MA) functions were provided to connect to, configure, and read from the multiplexer. Stimulation was also controlled using MATLAB. A custom script was written to ensure that the multiplexed stimulation channel was connected to the target electrode contact. Additionally, changes to the multiplexer configuration were logged alongside stimulation information, allowing synchronization with the recorded electrophysiological signals. This enabled recorded multiplexed signals to be split as the multiplexer configuration changed and a sparse 60 × n matrix of demultiplexed signals to be created, where n is the recording length in samples. While the multiplexing ASIC onboard the HD64 is capable of rapidly switching between connection configurations (though this increases the noise floor from 1.11 μ Vrms using an Intan RHD2164 alone to 2.65 μ Vrms with the HD64 connected and rapidly switching (Rachinskiy et al )), in this study multiplexer configurations remained static during an experiment. 2.4. 
In vivo evaluation of the HD64 All surgical and animal handling procedures were completed with approval from the Brown University Institutional Animal Care and Use Committee (IACUC), the Providence VA Medical System IACUC, and in accordance with the National Institutes of Health Guidelines for Animal Research (Guide for the Care and Use of Laboratory Animals). Two sheep (both female, aged 4.19 ± 0.3 years, weighing 92.5 ± 2.5 kg) were used for this study (figures (b) and (c)). Recordings were performed with both sheep for 15 months. Animals were kept in separate cages in a controlled environment on a 12 h light/dark cycle with ad libitum access to water and were fed twice daily. The ovine model was chosen for this study as the spine and spinal cord are comparable in size and share many anatomical features with humans, and the use of the ovine model to study the spinal cord has been well established (Marcus et al , Parker et al , , , Wilson et al , Reddy et al ). A visual overview of the experimental setup is shown in figure (a). 2.4.1. Surgical procedures The sheep were implanted, as previously reported (Calvert et al ). However, S1 was implanted with a single paddle, as the implanting neurosurgeons identified a narrow epidural space at L3 in this animal. Using only the caudal paddle array limits the rostrocaudal span accessible in this sheep, though the target neural structures underlying each array will be assessed independently. To briefly describe our surgical approach, under propofol-based general total intravenous anesthesia, an L4–L6 laminectomy with medial facetectomy was performed. The rostral HD64 paddle (if included) was gently placed, then slid rostrally, after which the caudal paddle was placed. Strain relief loops were made, and then the lead-tails were tunneled to the skin, where they were externalized. Reference and ground electrodes (Cooner AS636 wire, Cooner Wire Company, Chatsworth, CA) were secured epidurally and in the paraspinal muscles, respectively. Strain relief loops were made, and these wires were also tunneled then externalized. Intraoperative testing was used to confirm device functionality. The animals were allowed to recover from anesthesia then returned to their pens. A second surgical procedure was performed to place intramuscular EMG recording equipment (L03, Data Sciences International, St. Paul, MN) in the lower extremity musculature of S1. The sheep was intubated and placed prone on the operating table. The sheep was placed in a V-shaped foam block to maintain stability throughout the procedure, and a rectangular foam block was placed under the hip to alleviate pressure on the hindlegs (Calvert et al ). The legs were hung laterally off of the operating table, and each of the hooves was placed in a sterile surgical glove and wrapped in a sterile bandage for manipulation during the procedure. On each side, the extensor digitorum longus, biceps femoris , and gastrocnemius were identified by palpating anatomic landmarks. Small incisions were made over the bellies of each of the three muscles to be instrumented, and a small subcutaneous pocket was made on the flank. All three channels were tunneled from the subcutaneous pocket to the most proximal muscle. There, a single recording channel and its reference wire were trimmed to length and stripped of insulation. The bare electrodes were inserted into the muscle belly perpendicular to the muscle striations, through 1–2 cm of muscle. 
The electrodes were secured by suturing to the muscle at the insertion and exit points of the muscle belly. The remaining two channels were tunneled to the next muscle, where the process was repeated before the final electrode was tunneled to the most distal muscle and secured in the same manner. Finally, the telemetry unit was placed in the subcutaneous pocket on the upper rear flank and secured using a suture. Approximately 0.5 g of vancomycin powder was irrigated into the subcutaneous pocket, and the pocket was closed. The process was then repeated on the other side. After brief intraoperative testing, the animal was allowed to recover from anesthesia then returned to its pen. Both sheep recovered full ambulation in less than 6 h. 2.4.2. Post-implantation monitoring Both sheep were monitored at least twice daily by trained veterinary staff, who assessed changes in bladder function, respiration rate, food and water intake, and gastrointestinal output. Locomotor performance was assessed by a trained animal technician, who administered over 20 min of overground or treadmill-based walking assessments per day. No signs of device-related adverse events were noted. 2.4.3. Recording of EES-evoked motor potentials At the beginning of the experimental session, the awake sheep was hoisted in a sling (Panepinto, Fort Collins, CO) until clearance between its hooves and the floor was observed. The HD64 was connected, enabling simultaneous stimulation, recording, and control of the multiplexer. Stimulation amplitude ranges were identified for each sheep, ranging from below motor threshold to the maximum comfortable response above motor threshold. Five stimulation amplitudes were selected in the comfortable range, and each stimulation was delivered at four frequencies (10, 25, 50, and 100 Hz). In monopolar stimulation trials, stimulation was independently applied to each of the 60 contacts. A cathode-leading, 3:1 asymmetrical, charge-balanced waveform was used in all cases. The cathodic-phase pulse width was 167 μ s, and the stimulation train duration was 300 ms. The stimulation anode was the implanted reference wire in the paraspinal muscles. The inter-train interval was randomly drawn from a uniform distribution spanning 1–2 s, and stimulus presentation order was randomized. In bipolar stimulation trials, bipolar pairs were defined such that the spacing between the cathode and anode represented the minimum spacing possible on either the HD64 or Medtronic 5-6-5 paddles. Here, stimulation was provided at 5 amplitude values in the comfortable range for each sheep at a frequency of 10 Hz. This 100 ms inter-pulse-interval was selected to maximize the latency between stimuli so that the neural response to the previous pulse had subsided prior to the delivery of the subsequent stimulation. The stimulation waveform shape, train duration, and inter-train interval were unchanged from the monopolar trials. Stimulation presentation was again randomized. Data was collected during stimulation across 60 electrode contacts per spinal paddle and 6–8 EMG channels. The spinal, EMG, and stimulation data were synchronized by injecting a known bitstream into the time series data of the spinal and muscular recordings. Stimulation sessions did not exceed three hours, and the sheep were constantly monitored for signs of distress and fed throughout each session. At the conclusion of recording, the animal was returned to their pen. 2.4.4. 
Recording of EES-evoked spinal potentials At the beginning of each session, the awake sheep was hoisted in a sling, and the HD64 was connected as described previously. The HD64 was routed to create a stimulating bipole between the caudal-most midline electrodes and a recording bipole, either 7.3 mm, 17.0 mm, 26.7 mm, or 31.6 mm more rostral than the stimulating bipole. As the recording device (A-M Systems Model 1800) consisted of only two recording channels, each of the bipoles was recorded sequentially. For each stimulation event, a cathode-leading, symmetrical EES pulse was delivered to the stimulation bipole. The EES amplitude was set to 2 mA, 4 mA, and 6 mA, while the width of the cathodic phase was kept constant at 25 μ s. 2.4.5. Recording TENS-evoked local field potentials on the spinal cord With the sheep hoisted in the sling apparatus, the bony anatomy of the right-side hind fetlock was palpated, and a ring of wool just proximal to the fetlock was shorn, extending approximately 10 cm proximally. The exposed skin was cleaned with isopropyl alcohol, which was allowed to air dry. 2’ × 2’ TENS (Transcutaneous Electrical Nerve Stimulation) patches (Balego, Minneapolis, MN) were cut to size, placed on the medial and lateral aspects of the cannon bone, then connected to the stimulator device (Model 4100, A-M Systems Inc, Carlsborg, WA). The stimulation monitoring channel was connected to an analog input on the EMG system for synchronization. Stimulation amplitude was set to 25 V, and stimulation pulse width was 250 μ s. The interstimulus interval was 500 ms. Spinal local field potentials (LFPs) were recorded from contacts on the HD64. The above process was then repeated for the left hindlimb. 2.5. Extension of state-of-the-art EES parameter inference neural networks 2.5.1. Model reparameterization and data acquisition To extend the state-of-the-art EES parameter inference neural network models to 60 electrodes, the input space was reparameterized. The original model accepted a 3D input feature vector: amplitude, frequency, and electrode index (Govindarajan et al ). The new model used here now accepts a 4D input vector: amplitude, frequency, electrode mediolateral coordinate, and electrode rostrocaudal coordinate (i.e. inputting an x – y location rather than hardcoding an arbitrary electrode number). By defining the electrode position on a continual space, positional relationships between each electrode could be learned. As in the original model, frequency and amplitude values were normalized to the range [0, 1). When surgically placing the electrodes, it is impractical to ensure the electrode is perfectly aligned with the spinal cord in two dimensions, and is sitting flush with the dorsal surface. Additionally, the paddle may shift slightly during closing. For each sheep, radiographs were collected following the implantation of the HD64 electrodes. By examining the orientation of each electrode contact, an affine transformation was computed to account for skew between the electrode paddles and the radiograph camera (or equivalently, the spinal cord, as the sheep was placed sternally with the radiograph camera perpendicular to the longitudinal plane of the spinal cord). In this transformed space, the relative position of the rostral paddle (if placed) was determined with respect to the caudal paddle. Then, using the known dimensions of the HD64, the coordinates of each electrode contact were calculated. 
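The coordinate reconstruction described above can be sketched as follows. The fiducial correspondences, contact pitch, and grid layout are illustrative assumptions rather than the actual HD64 geometry; the sketch only demonstrates the least-squares affine fit and its application to candidate contact positions (the mapping direction can be inverted as needed).

import numpy as np

def fit_affine(src, dst):
    # Least-squares 2D affine transform mapping src -> dst (both (N, 2)).
    n = src.shape[0]
    x = np.hstack([src, np.ones((n, 1))])        # homogeneous source points
    m, *_ = np.linalg.lstsq(x, dst, rcond=None)  # (3, 2) affine matrix
    return m

def apply_affine(m, pts):
    return np.hstack([pts, np.ones((pts.shape[0], 1))]) @ m

# Hypothetical fiducials: paddle-frame corner contacts (mm) and where they
# appear in the radiograph (pixels), used to estimate skew, rotation and scale.
paddle_pts = np.array([[0.0, 0.0], [14.5, 0.0], [0.0, 40.0], [14.5, 40.0]])
radiograph_pts = np.array([[102, 230], [188, 236], [95, 10], [181, 18]])
m = fit_affine(paddle_pts, radiograph_pts)

# Map an illustrative grid of contact centres (assumed 5 x 12 layout) from the
# paddle frame into the radiograph frame.
ml = np.linspace(0.0, 14.5, 5)      # mediolateral positions (assumed)
rc = np.linspace(0.0, 40.0, 12)     # rostrocaudal positions (assumed)
grid = np.array([[x, y] for y in rc for x in ml])
contacts_in_radiograph = apply_affine(m, grid)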
The electrode coordinates were normalized to the same range as the amplitude and frequency (the left-most caudal electrode contact had a coordinate of (0, 0), with increasing values extending rightward and rostrally). This process enabled the contact separation of the HD64 to be correctly determined, even if the HD64 was not sitting perfectly aligned with the anatomy. The data acquisition phase proceeded as described for recording of monopolar EES-evoked motor responses. 2.5.2. Data handling & evaluation The data handling and preprocessing steps were performed as previously described (Govindarajan et al ). Briefly, the EMG recordings from bilateral extensor digitorum longus, biceps femoris , and gastrocnemius were high-pass filtered at 3 Hz, then a second-order infinite impulse response notch filter with a quality factor of 35 at 60 Hz. EMG envelopes were calculated by computing the moving RMS (root mean squared) value of each signal for a 300 ms window (50% window overlap). The enveloped data was epoched from 100 ms prior to the onset of stimulation to 300 ms after the conclusion of the stimulation train (700 ms epochs). The epoched data was labeled with the stimulation amplitude, frequency, and electrode coordinates determined by the map described previously. This dataset was the 100% density dataset, as it contained every stimulation electrode. The 50% density dataset was created by removing 30 electrode contacts from the 100% density dataset. Finally, the 25% density dataset was created by removing an additional 14 contacts from the 50% density dataset. The remaining data handling steps (outlier removal, unreliable sample removal, subthreshold EMG removal, and EMG summarization) were performed as previously described (Govindarajan et al ). Independent models were then trained for each density dataset. Following the completion of training and inference, stimulation proposals were generated for a set of target EMG responses obtained from testing EES parameter sets (40% of all conditions were held out, as in (Govindarajan et al )). Since these testing data were held out from the training data (the remaining 60% EES parameter conditions), the combination of target EMG responses and their EES parameters has never been shown to the network during the training phase. Proposals to achieve these target EMG responses were generated independently by all three models. If the stimulation contact coordinate inferred for a proposal was not encircled by a stimulation contact, the proposal was snapped to the nearest contact. The proposed stimulation amplitudes were checked to comply with the ranges the researcher found comfortable for each sheep and through software limits. To assess changes in the distribution of evoked motor responses, a set of stimulation parameters applied during the collection phase were repeated at the start of the evaluation phase. Then, each proposal was delivered 10 times, and the proposal order was randomized. After collection, the L1 error between the proposed EMG response and the achieved EMG response was calculated. 2.6. Statistics and data processing Following the completion of an experimental session, the recorded data were analyzed offline using custom-written code in MATLAB (R2023b) and Python 3.8.17 (using SciKit-Learn version 1.4.1 (Pedregosa et al ) and uniform manifold approximation and projection (UMAP) module (McInnes et al )). To compute recruitment curves, the recorded EMG signals were split into 500 ms stimulation-triggered epochs. 
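A minimal sketch of the stimulation-triggered epoching and rectified area under the curve (rAUC) computation described here and elaborated in the next paragraph is shown below, assuming synthetic EMG data, a hypothetical sampling rate, and hypothetical stimulation onset times.

import numpy as np

def epoch_raucs(emg, stim_onsets, fs, epoch_s=0.5):
    # emg: (n_muscles, n_samples); stim_onsets: onset times in samples.
    # Returns an (n_trials, n_muscles) array of rAUC values.
    n = int(epoch_s * fs)
    raucs = []
    for onset in stim_onsets:
        epoch = emg[:, onset:onset + n]          # 500 ms window after stimulation
        raucs.append(np.abs(epoch).sum(axis=1))  # rectify, then integrate
    return np.asarray(raucs)

def minmax_per_muscle(raucs):
    # Normalize each muscle's rAUC values to [0, 1] across all trials.
    lo, hi = raucs.min(axis=0), raucs.max(axis=0)
    return (raucs - lo) / (hi - lo + 1e-12)

# Illustrative use with synthetic data (shapes and sampling rate are assumptions)
fs = 2000
emg = np.random.randn(6, fs * 60)                    # 6 muscles, 60 s of data
stim_onsets = np.arange(fs, fs * 55, int(1.5 * fs))  # hypothetical onsets
raucs = minmax_per_muscle(epoch_raucs(emg, stim_onsets, fs))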
This epoch length was selected to capture the 300 ms pulse train and residual motor effects. The data were then rectified by taking the absolute value, and the power was computed by summing all samples in each epoch. The power values were then normalized to a range of [0, 1] for each muscle by subtracting the minimum activation and dividing by the range of activations. The EMG power (rectified area under the curve, or rAUC) was averaged across the four stimulation trials for each amplitude, frequency, and electrode combination, and the standard deviation was computed. To identify statistical clusters of evoked EMG responses, the single-trial recruitment data was rearranged into a 2D table, where rows were stimulation electrodes, and the recruitment data for each stimulation frequency was horizontally concatenated to form the columns. Raw rAUCs were normalized using the Yeo–Johnson power transformation (Yeo and Johnson ), and used as input features for the UMAP algorithm (Gorman , McInnes et al ). A two-dimensional embedding was generated, where each point represents the rAUC response pattern across muscles, stimulation amplitudes, and frequencies for a given electrode (four points per electrode, corresponding to the four stimulation repetitions). Spectral clustering (Shi and Malik ) was used to identify between 1 and 16 clusters of electrodes in the dataset, representing groups of electrodes with similar rAUC response patterns. A silhouette analysis (Rousseeuw ) indicated that the optimal number of clusters was eight. If the four repeats of the same electrode did not fall into the same cluster, that electrode was assigned the cluster label corresponding to the majority of the four labels. The process of epoching, rectifying and integrating the EMG signal was repeated for the bipolar stimulation datasets. Before normalizing the bipolar data, the unnormalized monopolar data was included such that both monopolar and bipolar EMG responses were normalized to the same range. The EMG distributions for each stimulation configuration were compared using a Mann–Whitney U test. This non-parametric test was selected as it does not assume normality of the underlying distributions. Finally, the selectivity indexes were computed for each muscle in all stimulation electrode configurations (Badi et al , Bryson et al ). The distributions of selectivity indexes were also compared using a Mann–Whitney U test. Spinal evoked compound action potentials (ECAPs) and SEPs were first split into 500 μ s and 100 ms stimulation-triggered epochs, respectively. The epochs were averaged across 50 stimulation trials and filtered with a low-pass filter at 2000 Hz. The mean and standard deviation of the ECAPs were calculated for each amplitude and recording bipole. The latency of the peak of the response was identified for each of the 50 single trials (Chakravarthy et al ). A linear fit of the latency-distance relationship was calculated, and the gradient of this fit was used to determine the conduction velocity (CV) (Parker et al , Lam et al ). The SEPs were re-referenced by subtracting the inverting electrode recording from the non-inverting electrode recording for each recording bipole (Verma and Romanauski et al ). The mean SEP response was calculated for each recording channel. To assess the spatial correlation between recorded channels, the normalized zero-lag cross-correlation coefficient was computed between each recording channel for single trials (Swindale and Spacek ).
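Before the channel-correlation analysis continued below, the following sketch illustrates the clustering pipeline described above (Yeo–Johnson normalization, UMAP embedding, spectral clustering, and silhouette analysis). The feature matrix is synthetic, and the hyperparameters shown are illustrative defaults rather than the exact settings used in this study.

import numpy as np
from sklearn.preprocessing import PowerTransformer
from sklearn.cluster import SpectralClustering
from sklearn.metrics import silhouette_score
import umap

# Rows: one entry per electrode repetition; columns: rAUC features concatenated
# across muscles, amplitudes, and frequencies (synthetic stand-in data).
rng = np.random.default_rng(0)
features = rng.gamma(shape=2.0, scale=1.0, size=(240, 120))  # 60 electrodes x 4 repeats

# Yeo-Johnson power transform, then a 2D UMAP embedding of the response patterns
features_t = PowerTransformer(method="yeo-johnson").fit_transform(features)
embedding = umap.UMAP(n_components=2, random_state=0).fit_transform(features_t)

# Choose the number of spectral clusters by silhouette analysis
best_k, best_score = None, -1.0
for k in range(2, 17):
    labels = SpectralClustering(n_clusters=k, affinity="nearest_neighbors",
                                random_state=0).fit_predict(embedding)
    score = silhouette_score(embedding, labels)
    if score > best_score:
        best_k, best_score = k, score
print(best_k, best_score)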
For each channel, the mean pairwise correlation to other channels was calculated over each trial. To assess the uniqueness between channels, the absolute value of the correlation was computed and then subtracted from 1. That is, perfectly correlated channels, with a mean pairwise correlation coefficient of 1, would receive a score of 0, while completely uncorrelated channels would receive a score of 1. The distribution of uniqueness scores was compared using a Kruskal–Wallis test. This non-parametric test was selected as it does not assume normality of the underlying distributions. To evaluate the performance of the EMG response simulation neural network (the forward model), the L1 loss was computed by subtracting the predicted EMG vector from the evoked EMG vector and then taking the absolute value. The L1 loss was selected as the error term as this function penalizes errors linearly (rather than quadratically, in the case of a mean squared error loss function), which reduces the training effect of outlier data points which may become apparent due to volitional movements made by the sheep. The evaluation process occurred on a held-out dataset of mean EMG responses to 480 stimulation combinations, each with 4 repeats. The distributions were compared using a Kruskal–Wallis test. The random performance of each network was determined by generating a randomized prediction for each muscle from a uniform distribution ranging from 0 to the maximum response in the training dataset. Then, this randomized EMG vector was compared to the target vector. The L1 losses were split for each model by inclusion of the stimulating electrode in the training dataset. The L1 losses were compared within models using the Mann–Whitney U test. Finally, the L1 losses were split for each model by muscle, and their distributions were compared using a Kruskal–Wallis test. To evaluate the performance of the parameter inference neural network (the inverse model), the rAUC was computed for each of the responses to inferred parameters. The mean L1 error between the predicted and achieved EMG rAUC vectors was calculated across muscles. Additionally, the L1 error between the EMG responses to the original EES parameters and the responses to these same parameters sent during the evaluation session (which occurred some hours after the training session) was calculated. The distributions of L1 errors were compared using the Kruskal–Wallis test.
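A compact sketch of the evaluation metrics used in this subsection is shown below. The predicted and achieved rAUC arrays and the grouping into model variants are synthetic placeholders; only the form of the L1 error and the non-parametric comparisons mirrors the analysis described above.

import numpy as np
from scipy.stats import kruskal, mannwhitneyu

rng = np.random.default_rng(1)

# Hypothetical predicted vs. achieved EMG rAUC vectors (trials x muscles)
predicted = rng.random((40, 6))
achieved = predicted + 0.1 * rng.standard_normal((40, 6))

# Per-trial L1 error, averaged across muscles
l1_error = np.abs(predicted - achieved).mean(axis=1)

# Compare error distributions from three hypothetical model variants
errors_by_model = [l1_error, l1_error * 1.2, l1_error * 0.9]
h_stat, p_kw = kruskal(*errors_by_model)

# Pairwise comparison of two distributions (e.g. seen vs. unseen electrodes)
u_stat, p_mw = mannwhitneyu(errors_by_model[0], errors_by_model[1])
print(p_kw, p_mw)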
2.1. Treatment span of sensory-motor EES In the T11 to L1 vertebral regions, the transverse diameter of the human spinal cord is 8–9.6 mm (Ko et al , Fradet et al ), the spinal canal sagittal depth ranges between 15.4 and 19.54 mm, and the spinal canal transverse width spans 16.7–26.5 mm (Laporte et al , Busscher et al ). Recent work has highlighted the benefit of activation of lateral structures (Calvert et al ), and their potential as a target for locomotor neuroprosthesis following spinal cord injury (Rowald et al ). Conventional surgical paddles for pain are limited to dorsal column medio-lateral treatment (<7.5 mm with 4 electrode columns) across two vertebral segments (<49 mm with 8 electrode rows). Extending the medio-lateral therapeutic span of EES delivery will facilitate accessing the dorsal horn and dorsal roots. Based on the desired treatment area, the paddle is required to cover a 13.7 × 43 mm stimulation area. Extending the treatment surface area for EES faces two challenges: (1) the lateral epidural potential volume adjacent to nerve roots is much thinner than at the midline, and (2) performing EES with a sparser electrode array would reduce the targeting resolution, potentially introducing off-target effects or non-specific activation.
2.1.1. Mechanical requirements for surgical introduction of the lead The paddle geometry was designed to mimic conventional EES paddles at the anatomical midline using a 2 mm thick geometry. Due to the 1 mm lateral epidural thickness, the paddle lateral edges were limited to 0.7 mm. Ten cross-sectional geometries of electrode profiles were developed in groups of three and sequentially evaluated. Each design was modeled using computer-aided design software and molded in silicone to exhibit various longitudinal and lateral flexibility profiles. The profile groups were sent to three neurosurgeons to evaluate the preferred paddle design (criteria for their evaluation included steerability, axial stiffness, and bendability) while accessing the epidural space through a small laminectomy. The resulting profile (figure (l)) was the final profile for two key reasons: (1) the lateral wings remained flexible and could bend through a conventional-width laminectomy, and (2) a relatively rigid paddle was strongly preferred as it enabled physicians to advance the paddle rostrally using forceps.
2.1.2. Smart-implant multiplexing architecture Using a fixed number of electrodes (e.g. 16 or 32) spread across a wider medio-lateral span would result in a sparser electrode density and a reduced EES targeting resolution. Given the lateral nerve root density, a sparse density was undesirable. Conventional implanted surgical paddles have reached 32 electrodes, but do not exhibit stimulation sites over the nerve roots and require management and connection of 4 lead tails below the skin. Scaling to 64 electrodes using current connector technology would require 8 lead-tails and a 64-channel connector on the IPG, which would greatly increase bulk in the body and surgical complexity. Since EES generally needs access to many electrodes but only a subset are used at the same time, we developed a multiplexing architecture to programmably connect electrodes to the pulse generator. Specifically, we sought to develop a scalable manufacturing process to produce paddle electrodes.
2.2. Implant-grade design requirements The HD64 is a smart paddle containing embedded electronics that is powered and controlled by a pulse generator over a two-lead-tail wiring scheme. To realize an active-multiplexed electrode array, we developed multiple new implant technologies including: (1) a high-voltage multiplexer that operates from charge-balanced AC power (figures (a) and (b)), with known noise performance (Rachinskiy et al ), (2) a high-density and low-cost 93-pin hermetic feedthrough package with >80% yield (figures (c)–(e)), (3) precision electrode technology (platinum–iridium with 100 μm lines and 100 μm spaces), and (4) medical-grade micro-bonding processes to permanently bond 85 electrical connections between the hermetic electronic package and electrode array. For long-term implanted use, we developed requirements based on our clinical inputs for people with spinal cord injury and/or chronic pain in the lower extremities. We established electrical and mechanical safety requirements and testing paradigms in accordance with ISO 14708-3:2017. We further developed a biocompatibility evaluation plan in accordance with ISO 10993-1:2018. The device also had to be manufactured according to GMP and undergo ethylene oxide (ETO) sterilization validation.
2.2.1. Electrical safety: AC-powered implanted satellite devices In this work, we describe two specific and critical design elements that enable new smart implants including: (1) ultra-low current satellite multiplexer operation using charge-balanced AC power over an implanted lead and pluggable-connector interface, (2) digital charge-balanced programmable control of the distal satellite multiplexer. We tested commercial leads and connectors with ±10 V AC power using an accelerated aging paradigm in a sealed saline container. At the end of a 5 year implanted life equivalent, the saline was analyzed for residual traces of platinum–iridium using inductively coupled plasma mass spectrometry to detect any electromigration. We determined that ±5 V AC through the lead-wire with a 2× voltage multiplier within the hermetic electronics package prevented any risk of electromigration. Additionally, we developed a separate IPG-side driver ASIC chip to deliver low-current ±5 V AC power with charge-balanced differential data lines to program the multiplexer. Programming a satellite multiplexer over a lead wire using standard board-level (DC) logic signals is not an acceptable risk, as any DC potential on the line can lead to electromigration at the connector. The driver ASIC was designed to deliver a programmable AC current limited to 10–100 μA and 10–500 kHz to power the multiplexing ASIC (note, this is independent of therapeutic stimulation frequency). Importantly, a novel real-time leakage current detection circuit was developed such that any broken wire or connection could be detected by the ASIC, which would then generate an internal digital fault flag.
2.2.2. Biocompatibility evaluation For translation to future human implanted use, we determined that the following ISO 10993-1 tests must be performed: cytotoxicity, sensitization, irritation, material-mediated pyrogenicity, acute systemic toxicity, subacute toxicity, implantation, genotoxicity, ethylene oxide residuals, and partial chemical characterization. To achieve these endpoints, approximately 300 active-paddle test articles were sent to a certified ISO 10993-1 test house over a 14 month duration.
All test articles successfully passed these tests, and no histological abnormalities were identified by the test house following any of the in vivo tests. A summary of the biocompatibility tests performed and their results is included in supplementary table 1.
2.2.3. Precision electrode technology Precision electrodes were developed with a novel delamination-free process using medical-implant grade silicone (NuSil, Carpinteria, CA), platinum–iridium 90/10 conductors, and a nylon mesh reinforcement layer (figure (f)). In this process, the silicone and platinum–iridium are pre-processed using a proprietary laser patterning technique, resulting in 50–100 μm electrode features in 50 μm thick platinum–iridium 90/10. A flexible reinforcement mesh layer was embedded within the silicone to limit stretching of the silicone and protect the electrode traces from fracture during stretching or repetitive pull cycles. The HD64 paddle consists of three silicone and two metal layers for a total substrate thickness of 400 μm. Using a proprietary process, the silicone layers are chemically fused together to form a seamless and delamination-free construction for long-term implanted operation.
2.2.4. High-voltage 24:64 multiplexing ASIC with AC power and charge-balanced programming An onboard multiplexer ASIC (figure (a)) was developed to support two key functions needed for this architecture: (1) a fail-safe low-voltage and low-current AC powering scheme with on-chip ±9 V compliance voltage multipliers and a bidirectional charge-balanced digital read-write scheme, (2) a 24:64 switch matrix to allow 24 bidirectional wires to connect to any of the 60 high-density electrodes. The ASIC was fabricated using the X-FAB XH035 18 V 350 nm silicon process. A polyimide redistribution layer was developed to provide a solder-ball flip-chip interface between the ASIC (figure (b)) and hermetic feedthrough (figure (c)). The multiplexer is designed to be dynamically programmed using a ±1.8 V charge-balanced, differential digital interface operating from ±5 V AC power. The AC power, 2 programming lines, 2 ground lines, and 19 bidirectional contact connections (24 conductors total) are distributed across two 12-contact lead tails. Once powered, programming can be used to connect any of the 24 input wires to the 64 outputs. The multiplexer contains 64 blocks, each containing four switches to ensure every output electrode has a redundant connection to at least four stimulation/recording inputs. Switches were assigned to blocks in such a way as to maximize the number of switches that may become nonfunctional while still enabling sequential programming to raster through the entire 60-electrode array. The ASIC operates from AC power (±5 V AC, 10–500 kHz) and features real-time current-limiting. Charge balance is necessary to prevent corrosion and long-term electro-migration of the conductors. The ASIC uses an on-board rectifier with off-chip capacitors to internally convert the AC signal to ±9 V and 3.3 V DC.
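To make the switch-matrix concept concrete, the toy model below represents the 24:64 routing constraint as a boolean connectivity matrix in which each output electrode has four candidate input wires, and routes a requested set of electrodes onto distinct wires. The round-robin candidate assignment and the greedy routing function are purely illustrative; the actual switch-to-block mapping of the ASIC is proprietary and not reproduced here.

```python
# Toy model of a 24:64 switch matrix with four candidate inputs per output.
import numpy as np

N_INPUTS, N_OUTPUTS, SWITCHES_PER_OUTPUT = 24, 64, 4

# allowed[i, j] is True if input wire i can be switched onto output electrode j.
# The round-robin candidate assignment below is hypothetical.
allowed = np.zeros((N_INPUTS, N_OUTPUTS), dtype=bool)
for j in range(N_OUTPUTS):
    for s in range(SWITCHES_PER_OUTPUT):
        allowed[(j + s * 6) % N_INPUTS, j] = True

def route(requested_outputs):
    """Greedily assign a distinct input wire to each requested output electrode."""
    assignment, used = {}, set()
    for j in requested_outputs:
        candidates = [i for i in np.flatnonzero(allowed[:, j]) if i not in used]
        if not candidates:
            raise ValueError(f"no free input wire available for electrode {j}")
        assignment[j] = int(candidates[0])
        used.add(candidates[0])
    return assignment

# Each output has exactly four candidate inputs, mirroring the 4-switch blocks.
assert (allowed.sum(axis=0) == SWITCHES_PER_OUTPUT).all()
print(route([0, 5, 17, 33]))   # maps each requested electrode to a distinct wire
```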
2.2.5. High-density hermetic electronics package We developed a custom 93-feedthrough ceramic assembly (8 × 8 × 0.75 mm) brazed into a titanium flange. On the interior of the package, the ASIC was flip-chipped to the surface of the feedthrough array (figure (c)). A low-outgassing underfill was applied between the ASIC and the ceramic and inspected for voids. A moisture getter was applied to the interior titanium lid, which was then seam-welded to form a hermetic assembly. Surface mount capacitors were soldered to metal pads on the ceramic feedthrough surface. The lid was attached to the feedthrough array and a laser seam-weld was performed in a nitrogen–helium environment. On the exterior of the hermetic packaging, the feedthroughs emerge from the ceramic surface and serve as a surface for 93 subsequent micro-bonds to be performed with the electrode array (figures (d) and (e)).
2.2.6. Electrode-hermetic electronics integration The completed hermetic electronics assembly and high-density electrode assembly were then micro-bonded using a proprietary process. Rigorous machine vision was used to perform automated inspection of the 93 feedthroughs and receiving electrode pad positions, and components with >35 μm of tolerance were discarded to ensure reliability of the bonding process. Fixturing and vision cameras were used to align the components together until the feedthroughs and receiving pads were overlapping with <20 μm offset (figure (f)). After the precision alignment was performed, a custom-made machine was developed to perform thermal welding between each platinum–iridium pad and each platinum–iridium feedthrough.
2.2.7. Injection molding and lead-tail attachment The proximal end of the HD64 smart paddle uses two connector tails, each with 12 in-line ring contacts (figure (h)), which have decades of long-term implant reliability (Letechipia et al ) and are compatible with IPGs. The lead-tails were manufactured and welded to the platinum–iridium receiving pads on the HD64 substrate. A silicone injection molding step was used to create the contoured paddle profile of the HD64, as well as to completely insulate between all of the feedthrough pins. The high-pressure injection molding process with appropriate ports and runners ensured that no air voids were present between feedthrough pins. All fixtures and processes were developed to ensure the silicone flow did not trap bubbles during the assembly process. Following this process, the HD64 was completely assembled and ready for use (figure (h)).
2.3. Benchtop evaluation of the HD64
2.3.1. Mechanical testing of the array body and lead tails To evaluate the performance of the HD64 following the mechanical forces experienced during surgical implantation and in the epidural potential space, a set of mechanical tests was conducted. The tests assessed the electrical performance following flexion of independent sections of the paddle-lead tail assembly (as these sections experience different flexion profiles over their lifetime), and the tensile performance of the entire assembly. To determine the tensile properties of the assembly, 29 HD64 samples (58 lead tails) underwent standardized tensile assessment. All samples were soaked in 0.9 g l−1 ± 10% NaCl solution at 37 ± 2 °C. The initial lengths of the samples were measured using a calibrated ruler. The sample under test was then placed into a tensile testing fixture (MTS Systems, Eden Prairie, MN). The array body was placed in the bottom grip, using silicone pneumatic grips, and the proximal end of the lead tail being assessed was secured in the top grip using a serrated fixture. The 5.5 N tensile load setting of the testing fixture was used to achieve a 5 N tensile force (50 mm min−1 velocity), which was held for 60 s. After testing both lead tails, the sample was removed from the fixture, the final lengths of the samples were measured, and the electrical isolation between each conductor and contact site was assessed. To pass this assessment, lead tails must exhibit permanent elongation less than 5% and electrical isolation between all tested nets. All 29 samples exhibited permanent elongation less than 5%, with a mean permanent elongation of (3.6 × 10−4 ± 4.3 × 10−4)% (mean ± standard deviation). The distribution of observed permanent elongation is presented in supplementary figure 1(a). Of the 29 samples tested, 1 failed electrical testing (a disconnect between a conductor in the lead tail and the multiplexing ASIC, and two shorts between pairs of lead tail conductors). Such a failure would be easily detectable during system testing prior to implantation of the device, and would simply result in a short delay of surgery while a backup device was located. The low failure rate observed following tensile testing, combined with the low severity of such a failure, resulted in an acceptable level of risk. During surgical implantation, sections of the HD64 experience differing flexion forces. Therefore, independent tests were developed to adequately assess the electrical performance of the HD64 assembly after exposure to these forces. Firstly, to simulate the flexion experienced by the array body during implantation and in the epidural potential space, 29 array bodies were flexed using a custom flexion testing fixture to ±22° for 1500 cycles, with a 0.5 s cycle duration. Following the flexion cycles, the electrical isolation between each conductor and contact site was assessed. In all 29 samples tested, all electrical connections remained nominal, with no open or short circuit conditions created. The second aspect to be tested was the most distal end of the lead tail, which is typically coiled and secured to form a strain relief loop. To assess this section of the lead tail, the paddle bodies were held secure, and the distal sections of the lead tails were flexed to ±45° for 1500 cycles, with a 0.5 s cycle duration. Following the flexion cycles, the electrical isolation between each conductor and contact site was assessed.
In 3 of the 29 tested samples, new short or open circuit conditions were observed, though these were all below the 3 short or open conductor limit afforded by switch matrix redundancy. Finally, the impact of repeated flexion on the resistance of the distal section of the HD64 (simulating the 2 year implanted lifetime required of the device for this study) was assessed. For this test, the distal section of the lead tail was sectioned to be 20 cm in length. With the paddle body secured, the lead tail was flexed to ±90° for 47 000 cycles, with a 0.5 s cycle duration. Following the flexion cycles, the lead tail was cut to separate it from the paddle body, and the 12 conductors in each lead tail were removed from the tail to facilitate resistance testing. The maximum allowable resistance for the 20 cm section is 6 Ω (equivalent to a resistance per unit length of 30 Ω m−1). The distribution of observed resistances is shown in supplementary figure 1(b). All measured conductors exhibited a post-flexion resistance below the 6 Ω criterion, indicating minimal impacts on lead resistance due to flexion over the implanted lifetime of the device. The successful completion of mechanical testing indicated a robust design, which continued to hermetic testing.
2.3.2. Hermeticity of the electronics package To protect the embedded active electronics from the ionic environment in the epidural space, it is necessary to ensure the hermeticity of the electronics package and feedthrough assembly. Hermeticity model calculations were performed using a 2 year shelf life and a 1 year implanted life. Hermetic tests for electronics (MIL-STD-883 Method 1014.15) have failure limits based on the dew point inside a free volume package. The internal free volume of the HD64 assembly was 0.02 cm3, which corresponds to a failure leakage rate of 5 × 10−9 atm cm3 s−1 of air. This is equivalent to 13.5 × 10−9 atm cm3 s−1 of leakage using helium, which is used to test hermetic components. Using the laser interferometer method, compliant with the standard, the leak rate was measured for 171 hermetic packages. Figure (g) shows the distribution of measured leak rate values. Fewer than 10% of assembled packages exhibited gross leak rates; the remaining packages were sent for subsequent testing. All accepted packages recorded a leak rate at least 15 times lower than the failure threshold set by MIL-STD-883. Based on the empirical leak rates (approximately 5 × 10−10 atm cm3 s−1), hermeticity calculations suggest the internal free volume of the package will not reach its dew point until far beyond a 2 year shelf life and 2 years of implanted use. Using our measured leakage values, the dimensions of the device, and the material properties of our moisture getter, the calculated hermetic lifespan of the hermetic electronics package is approximately 4 years (Greenhouse, Lowry, and Romenesko , 71). Additional validation may be required prior to implanting the HD64 for longer than this calculated lifespan. We were therefore confident that the hermetic package could safely proceed to 15 month in vivo testing.
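The air-to-helium conversion of the leak-rate limit quoted above is consistent with the usual molecular-flow (Graham's law) scaling, as the short check below illustrates; the molar masses are standard values, and the scaling assumption is ours rather than a statement of the exact method prescribed by the test standard.

```python
# Air-equivalent to helium-equivalent leak-rate conversion under a
# molecular-flow (Graham's law) assumption: L_He = L_air * sqrt(M_air / M_He).
M_AIR, M_HE = 28.97, 4.003            # g/mol
L_AIR = 5e-9                          # atm cm^3 s^-1, air-equivalent failure limit
L_HE = L_AIR * (M_AIR / M_HE) ** 0.5  # helium-equivalent limit
print(f"{L_HE:.2e} atm cm^3 s^-1")    # ~1.35e-08, matching the 13.5e-9 figure above
```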
2.3.3. Control of active-multiplexing As all 60 contacts are bidirectional (capable of simultaneous stimulation and recording), maintaining knowledge of the state of the HD64 multiplexer was necessary to demultiplex the recorded spinal responses and stimulation information post hoc. A schematic representation of the connections and devices used in this manuscript is presented in figure (a). The HD64 is powered and controlled by an external controller (MB-Controller, Micro-Leads Medical, Somerville, MA), which communicates with a host PC over a serial connection. MATLAB (version 2023b, MathWorks, Natick, MA) functions were provided to connect to, configure, and read from the multiplexer. Stimulation was also controlled using MATLAB. A custom script was written to ensure that the multiplexed stimulation channel was connected to the target electrode contact. Additionally, changes to the multiplexer configuration were logged alongside stimulation information, allowing synchronization with the recorded electrophysiological signals. This enabled recorded multiplexed signals to be split as the multiplexer configuration changed and a sparse 60 × n matrix of demultiplexed signals to be created, where n is the recording length in samples. Although the multiplexing ASIC onboard the HD64 is capable of rapidly switching between connection configurations (at the cost of an increased noise floor, from 1.11 μVrms using an Intan RHD2164 alone to 2.65 μVrms with the HD64 connected and rapidly switching (Rachinskiy et al )), multiplexer configurations remained static during each experiment in this study.
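A minimal sketch of the demultiplexing bookkeeping is shown below; the original tooling was MATLAB-based, so this Python version, its variable names, and the configuration-log format are illustrative assumptions only.

```python
# Rebuild a sparse (60 x n_samples) matrix of per-electrode signals from the
# multiplexed wire recordings and the logged multiplexer configuration changes.
import numpy as np

def demultiplex(recorded, config_log, n_electrodes=60):
    """recorded: (n_wires, n_samples) amplifier data from the multiplexed wires.
    config_log: list of (start_sample, {wire_index: electrode_index}) entries,
    ordered by start_sample. Samples with no connected electrode remain NaN."""
    n_samples = recorded.shape[1]
    demux = np.full((n_electrodes, n_samples), np.nan)
    starts = [start for start, _ in config_log] + [n_samples]
    for (start, wire_to_electrode), stop in zip(config_log, starts[1:]):
        for wire, electrode in wire_to_electrode.items():
            demux[electrode, start:stop] = recorded[wire, start:stop]
    return demux
```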
2.4. In vivo evaluation of the HD64 All surgical and animal handling procedures were completed with approval from the Brown University Institutional Animal Care and Use Committee (IACUC), the Providence VA Medical System IACUC, and in accordance with the National Institutes of Health Guidelines for Animal Research (Guide for the Care and Use of Laboratory Animals). Two sheep (both female, aged 4.19 ± 0.3 years, weighing 92.5 ± 2.5 kg) were used for this study (figures (b) and (c)). Recordings were performed with both sheep for 15 months. Animals were kept in separate cages in a controlled environment on a 12 h light/dark cycle with ad libitum access to water and were fed twice daily. The ovine model was chosen for this study as the spine and spinal cord are comparable in size and share many anatomical features with humans, and the use of the ovine model to study the spinal cord has been well established (Marcus et al , Parker et al , , , Wilson et al , Reddy et al ). A visual overview of the experimental setup is shown in figure (a).
2.4.1. Surgical procedures The sheep were implanted as previously reported (Calvert et al ). However, S1 was implanted with a single paddle, as the implanting neurosurgeons identified a narrow epidural space at L3 in this animal. Using only the caudal paddle array limits the rostrocaudal span accessible in this sheep, though the target neural structures underlying each array will be assessed independently. To briefly describe our surgical approach, under propofol-based general total intravenous anesthesia, an L4–L6 laminectomy with medial facetectomy was performed. The rostral HD64 paddle (if included) was gently placed, then slid rostrally, after which the caudal paddle was placed. Strain relief loops were made, and then the lead-tails were tunneled to the skin, where they were externalized. Reference and ground electrodes (Cooner AS636 wire, Cooner Wire Company, Chatsworth, CA) were secured epidurally and in the paraspinal muscles, respectively. Strain relief loops were made, and these wires were also tunneled then externalized. Intraoperative testing was used to confirm device functionality. The animals were allowed to recover from anesthesia then returned to their pens. A second surgical procedure was performed to place intramuscular EMG recording equipment (L03, Data Sciences International, St. Paul, MN) in the lower extremity musculature of S1. The sheep was intubated and placed prone on the operating table. The sheep was placed in a V-shaped foam block to maintain stability throughout the procedure, and a rectangular foam block was placed under the hip to alleviate pressure on the hindlegs (Calvert et al ). The legs were hung laterally off of the operating table, and each of the hooves was placed in a sterile surgical glove and wrapped in a sterile bandage for manipulation during the procedure. On each side, the extensor digitorum longus, biceps femoris, and gastrocnemius were identified by palpating anatomic landmarks. Small incisions were made over the bellies of each of the three muscles to be instrumented, and a small subcutaneous pocket was made on the flank. All three channels were tunneled from the subcutaneous pocket to the most proximal muscle. There, a single recording channel and its reference wire were trimmed to length and stripped of insulation. The bare electrodes were inserted into the muscle belly perpendicular to the muscle striations, through 1–2 cm of muscle.
The electrodes were secured by suturing to the muscle at the insertion and exit points of the muscle belly. The remaining two channels were tunneled to the next muscle, where the process was repeated before the final electrode was tunneled to the most distal muscle and secured in the same manner. Finally, the telemetry unit was placed in the subcutaneous pocket on the upper rear flank and secured using a suture. Approximately 0.5 g of vancomycin powder was irrigated into the subcutaneous pocket, and the pocket was closed. The process was then repeated on the other side. After brief intraoperative testing, the animal was allowed to recover from anesthesia then returned to its pen. Both sheep recovered full ambulation in less than 6 h.
2.4.2. Post-implantation monitoring Both sheep were monitored at least twice daily by trained veterinary staff, who assessed changes in bladder function, respiration rate, food and water intake, and gastrointestinal output. Locomotor performance was assessed by a trained animal technician, who administered over 20 min of overground or treadmill-based walking assessments per day. No signs of device-related adverse events were noted.
2.4.3. Recording of EES-evoked motor potentials At the beginning of the experimental session, the awake sheep was hoisted in a sling (Panepinto, Fort Collins, CO) until clearance between its hooves and the floor was observed. The HD64 was connected, enabling simultaneous stimulation, recording, and control of the multiplexer. Stimulation amplitude ranges were identified for each sheep, ranging from below motor threshold to the maximum comfortable response above motor threshold. Five stimulation amplitudes were selected in the comfortable range, and each amplitude was delivered at four frequencies (10, 25, 50, and 100 Hz). In monopolar stimulation trials, stimulation was independently applied to each of the 60 contacts. A cathode-leading, 3:1 asymmetrical, charge-balanced waveform was used in all cases. The cathodic-phase pulse width was 167 μs, and the stimulation train duration was 300 ms. The stimulation anode was the implanted reference wire in the paraspinal muscles. The inter-train interval was randomly drawn from a uniform distribution spanning 1–2 s, and stimulus presentation order was randomized. In bipolar stimulation trials, bipolar pairs were defined such that the spacing between the cathode and anode represented the minimum spacing possible on either the HD64 or Medtronic 5-6-5 paddles. Here, stimulation was provided at 5 amplitude values in the comfortable range for each sheep at a frequency of 10 Hz. This 100 ms inter-pulse interval was selected to maximize the latency between stimuli so that the neural response to the previous pulse had subsided prior to the delivery of the subsequent stimulation. The stimulation waveform shape, train duration, and inter-train interval were unchanged from the monopolar trials. Stimulation presentation was again randomized. Data was collected during stimulation across 60 electrode contacts per spinal paddle and 6–8 EMG channels. The spinal, EMG, and stimulation data were synchronized by injecting a known bitstream into the time series data of the spinal and muscular recordings. Stimulation sessions did not exceed three hours, and the sheep were constantly monitored for signs of distress and fed throughout each session. At the conclusion of recording, the animal was returned to its pen.
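As an illustration of how such a randomized monopolar schedule can be generated, the sketch below enumerates electrode, amplitude, and frequency combinations, shuffles the presentation order, and draws the 1–2 s inter-train intervals. The amplitude values and the deliver_train() stub are placeholders, not the stimulator API used in this study.

```python
# Sketch of the randomized monopolar stimulation schedule (placeholder values).
import random
import time

def deliver_train(electrode, amplitude_ma, frequency_hz, train_ms=300,
                  cathodic_pulse_us=167):
    """Stand-in for the stimulator call; here it only logs the parameters."""
    print(f"stim electrode {electrode:02d}: {amplitude_ma} mA at {frequency_hz} Hz")

random.seed(0)
amplitudes_ma = [1.0, 1.5, 2.0, 2.5, 3.0]   # placeholders for the per-sheep comfortable range
frequencies_hz = [10, 25, 50, 100]
electrodes = range(60)
n_repeats = 4

trials = [(e, a, f) for e in electrodes for a in amplitudes_ma
          for f in frequencies_hz for _ in range(n_repeats)]
random.shuffle(trials)                       # randomized presentation order

for electrode, amplitude, frequency in trials:
    deliver_train(electrode, amplitude, frequency)
    time.sleep(random.uniform(1.0, 2.0))     # 1-2 s randomized inter-train interval
```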
2.4.4. Recording of EES-evoked spinal potentials At the beginning of each session, the awake sheep was hoisted in a sling, and the HD64 was connected as described previously. The HD64 was routed to create a stimulating bipole between the caudal-most midline electrodes, and a recording bipole located either 7.3 mm, 17.0 mm, 26.7 mm, or 31.6 mm rostral to the stimulating bipole. As the recording device (A-M Systems Model 1800) consisted of only two recording channels, each of the bipoles was recorded sequentially. For each stimulation event, a cathode-leading, symmetrical EES pulse was delivered to the stimulation bipole. The EES amplitude was set to 2 mA, 4 mA, and 6 mA, while the width of the cathodic phase was kept constant at 25 μs.
2.4.5. Recording TENS-evoked local field potentials on the spinal cord With the sheep hoisted in the sling apparatus, the bony anatomy of the right-side hind fetlock was palpated, and a ring of wool just proximal to the fetlock was shorn, extending approximately 10 cm proximally. The exposed skin was cleaned with isopropyl alcohol, which was allowed to air dry. 2 in × 2 in TENS (transcutaneous electrical nerve stimulation) patches (Balego, Minneapolis, MN) were cut to size, placed on the medial and lateral aspects of the cannon bone, then connected to the stimulator device (Model 4100, A-M Systems Inc, Carlsborg, WA). The stimulation monitoring channel was connected to an analog input on the EMG system for synchronization. Stimulation amplitude was set to 25 V, and stimulation pulse width was 250 μs. The interstimulus interval was 500 ms. Spinal local field potentials (LFPs) were recorded from contacts on the HD64. The above process was then repeated for the left hindlimb.
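For reference, the conduction-velocity estimate described in the data-analysis section amounts to a linear fit of ECAP peak latency against the recording-bipole distances listed above; the sketch below uses invented latency values purely to show the arithmetic.

```python
# Conduction velocity from the latency-distance relationship (illustrative latencies).
import numpy as np

distance_mm = np.array([7.3, 17.0, 26.7, 31.6])   # recording bipole distances above
latency_ms = np.array([0.25, 0.55, 0.85, 1.00])   # hypothetical ECAP peak latencies

slope_ms_per_mm, _ = np.polyfit(distance_mm, latency_ms, 1)
cv_m_per_s = 1.0 / slope_ms_per_mm                # mm/ms is numerically equal to m/s
print(f"conduction velocity ~ {cv_m_per_s:.1f} m/s")
```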
2.5. Extension of state-of-the-art EES parameter inference neural networks
2.5.1. Model reparameterization and data acquisition To extend the state-of-the-art EES parameter inference neural network models to 60 electrodes, the input space was reparameterized. The original model accepted a 3D input feature vector: amplitude, frequency, and electrode index (Govindarajan et al ). The new model used here accepts a 4D input vector: amplitude, frequency, electrode mediolateral coordinate, and electrode rostrocaudal coordinate (i.e. inputting an x–y location rather than hardcoding an arbitrary electrode number). By defining the electrode position in a continuous space, positional relationships between electrodes could be learned. As in the original model, frequency and amplitude values were normalized to the range [0, 1). When surgically placing the electrodes, it is impractical to ensure the electrode is perfectly aligned with the spinal cord in two dimensions and sitting flush with the dorsal surface. Additionally, the paddle may shift slightly during closing. For each sheep, radiographs were collected following the implantation of the HD64 electrodes. By examining the orientation of each electrode contact, an affine transformation was computed to account for skew between the electrode paddles and the radiograph camera (or equivalently, the spinal cord, as the sheep was placed sternally with the radiograph camera perpendicular to the longitudinal plane of the spinal cord). In this transformed space, the relative position of the rostral paddle (if placed) was determined with respect to the caudal paddle. Then, using the known dimensions of the HD64, the coordinates of each electrode contact were calculated. The electrode coordinates were normalized to the same range as the amplitude and frequency (the left-most caudal electrode contact had a coordinate of (0, 0), with increasing values extending rightward and rostrally). This process enabled the contact separation of the HD64 to be correctly determined, even if the HD64 was not sitting perfectly aligned with the anatomy. The data acquisition phase proceeded as described for recording of monopolar EES-evoked motor responses.
2.5.2. Data handling & evaluation The data handling and preprocessing steps were performed as previously described (Govindarajan et al ). Briefly, the EMG recordings from bilateral extensor digitorum longus, biceps femoris, and gastrocnemius were high-pass filtered at 3 Hz, then filtered with a second-order infinite impulse response notch filter (quality factor 35) at 60 Hz. EMG envelopes were calculated by computing the moving RMS (root mean squared) value of each signal for a 300 ms window (50% window overlap). The enveloped data was epoched from 100 ms prior to the onset of stimulation to 300 ms after the conclusion of the stimulation train (700 ms epochs). The epoched data was labeled with the stimulation amplitude, frequency, and electrode coordinates determined by the map described previously. This dataset was the 100% density dataset, as it contained every stimulation electrode. The 50% density dataset was created by removing 30 electrode contacts from the 100% density dataset. Finally, the 25% density dataset was created by removing an additional 14 contacts from the 50% density dataset. The remaining data handling steps (outlier removal, unreliable sample removal, subthreshold EMG removal, and EMG summarization) were performed as previously described (Govindarajan et al ).
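A minimal sketch of this envelope preprocessing is given below, assuming a sampling rate fs and a second-order Butterworth high-pass (the high-pass order is not specified in the text); it uses SciPy's iirnotch and zero-phase filtering for illustration rather than reproducing the original implementation.

```python
# Sketch of the EMG envelope preprocessing (3 Hz high-pass, 60 Hz notch with Q = 35,
# then a 300 ms moving RMS with 50% overlap).
import numpy as np
from scipy.signal import butter, sosfiltfilt, iirnotch, filtfilt

def emg_envelope(emg, fs, win_ms=300.0, overlap=0.5):
    """emg: (n_samples,) raw EMG trace; returns the moving-RMS envelope."""
    sos = butter(2, 3.0, btype="highpass", fs=fs, output="sos")   # assumed 2nd order
    x = sosfiltfilt(sos, emg)
    b, a = iirnotch(60.0, Q=35.0, fs=fs)                          # 60 Hz notch, Q = 35
    x = filtfilt(b, a, x)
    win = int(round(win_ms * 1e-3 * fs))
    hop = max(1, int(round(win * (1.0 - overlap))))               # 50% overlap
    starts = range(0, len(x) - win + 1, hop)
    return np.array([np.sqrt(np.mean(x[s:s + win] ** 2)) for s in starts])
```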
Independent models were then trained for each density dataset. Following the completion of training and inference, stimulation proposals were generated for a set of target EMG responses obtained from testing EES parameter sets (40% of all conditions were held out, as in (Govindarajan et al )). Since these testing data were held out from the training data (the remaining 60% of EES parameter conditions), the combination of target EMG responses and their EES parameters was never shown to the network during the training phase. Proposals to achieve these target EMG responses were generated independently by all three models. If the stimulation coordinate inferred for a proposal did not fall within the footprint of a physical stimulation contact, the proposal was snapped to the nearest contact. The proposed stimulation amplitudes were checked against the amplitude ranges the researcher had determined to be comfortable for each sheep, and these limits were also enforced in software. To assess changes in the distribution of evoked motor responses, a set of stimulation parameters applied during the collection phase was repeated at the start of the evaluation phase. Then, each proposal was delivered 10 times, with the proposal order randomized. After collection, the L1 error between the target EMG response and the achieved EMG response was calculated.
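The snapping and error steps can be illustrated with a short sketch. The normalized contact grid, proposal coordinate, and EMG vectors below are placeholders, not the true HD64 layout or recorded responses.

```python
import numpy as np

# Illustrative sketch of the proposal post-processing described above; the
# contact grid is a hypothetical 4 x 15 layout in normalized coordinates.
contact_grid = np.array([[x, y] for y in np.linspace(0, 1, 15)
                                for x in np.linspace(0, 1, 4)])   # 60 contacts

def snap_to_nearest_contact(proposed_xy, contacts=contact_grid):
    """Return the physical contact closest to an inferred (x, y) proposal."""
    d = np.linalg.norm(contacts - np.asarray(proposed_xy), axis=1)
    return contacts[np.argmin(d)]

def l1_error(target_emg, achieved_emg):
    """Mean absolute error between target and achieved EMG power vectors."""
    return np.mean(np.abs(np.asarray(target_emg) - np.asarray(achieved_emg)))

snapped = snap_to_nearest_contact([0.41, 0.63])
err = l1_error([0.2, 0.7, 0.1, 0.0, 0.5, 0.3],
               [0.25, 0.6, 0.15, 0.05, 0.45, 0.35])
print(snapped, err)
```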
Statistics and data processing Following the completion of an experimental session, the recorded data were analyzed offline using custom-written code in MATLAB (R2023b) and Python 3.8.17 (using SciKit-Learn version 1.4.1 (Pedregosa et al ) and the uniform manifold approximation and projection (UMAP) module (McInnes et al )). To compute recruitment curves, the recorded EMG signals were split into 500 ms stimulation-triggered epochs. This epoch length was selected to capture the 300 ms pulse train and residual motor effects. The data was then rectified by taking the absolute value, and the power was computed by summing all samples in each epoch. The power values were then normalized to a range of [0, 1] for each muscle by subtracting the minimum activation and dividing by the range of activations. The EMG power (rectified area under the curve, or rAUC) was averaged across the four stimulation trials for each amplitude, frequency, and electrode combination, and the standard deviation was computed. To identify statistical clusters of evoked EMG responses, the single-trial recruitment data was rearranged into a 2D table, where rows were stimulation electrodes, and the recruitment data for each stimulation frequency was horizontally concatenated to form the columns. Raw rAUCs were normalized using the Yeo–Johnson power transformation (Yeo and Johnson ), and used as input features for the UMAP algorithm (Gorman , McInnes et al ). A two-dimensional embedding was generated, where each point represents the rAUC response pattern across muscles, stimulation amplitudes, and frequencies for a given electrode (four points per electrode, corresponding to the four stimulation repetitions). Spectral clustering (Shi and Malik ) was used to identify between 1 and 16 clusters of electrodes in the dataset, representing groups of electrodes with similar rAUC response patterns. A silhouette analysis (Rousseeuw ) indicated that the optimal number of clusters was eight. If the four repeats of the same electrode did not fall into the same cluster, that electrode was assigned the cluster label corresponding to the majority of the four labels. The process of epoching, rectifying, and integrating the EMG signal was repeated for the bipolar stimulation datasets. Before normalizing the bipolar data, the unnormalized monopolar data was included such that both monopolar and bipolar EMG responses were normalized to the same range. The EMG distributions for each stimulation configuration were compared using a Mann–Whitney U test. This non-parametric test was selected as it does not assume normality of the underlying distributions. Finally, the selectivity indexes were computed for each muscle in all stimulation electrode configurations (Badi et al , Bryson et al ). The distributions of selectivity indexes were also compared using a Mann–Whitney U test. Spinal evoked compound action potentials (ECAPs) and SEPs were first split into 500 μ s and 100 ms stimulation-triggered epochs, respectively. The epochs were averaged across 50 stimulation trials and filtered with a low-pass filter at 2000 Hz. The mean and standard deviation of the ECAPs were calculated for each amplitude and recording bipole. The latency of the peak of the response was identified for each of the 50 single trials (Chakravarthy et al ). A linear fit of the latency-distance relationship was calculated, and the gradient of this fit was used to determine the conduction velocity (CV) (Parker et al , Lam et al ).
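The electrode-clustering pipeline described above (Yeo–Johnson normalization, UMAP embedding, spectral clustering, and silhouette-based selection of the cluster count) can be sketched as follows; the synthetic rAUC matrix and its dimensions are assumptions standing in for the recorded data.

```python
import numpy as np
import umap                                    # umap-learn package
from sklearn.preprocessing import PowerTransformer
from sklearn.cluster import SpectralClustering
from sklearn.metrics import silhouette_score

# Sketch of the clustering pipeline with synthetic rAUC features; rows are
# electrode repetitions (60 electrodes x 4 repeats), columns are flattened
# muscle x amplitude x frequency responses (dimensions are illustrative).
rng = np.random.default_rng(0)
rauc_features = rng.random((240, 6 * 10 * 3))

# Yeo-Johnson power transform, then 2D UMAP embedding of each response pattern
x = PowerTransformer(method="yeo-johnson").fit_transform(rauc_features)
embedding = umap.UMAP(n_components=2, random_state=0).fit_transform(x)

# Spectral clustering over a range of cluster counts; pick k by silhouette
# score (silhouette analysis requires at least two clusters)
best_k, best_score, best_labels = None, -1.0, None
for k in range(2, 17):
    labels = SpectralClustering(n_clusters=k, random_state=0,
                                assign_labels="discretize").fit_predict(embedding)
    score = silhouette_score(embedding, labels)
    if score > best_score:
        best_k, best_score, best_labels = k, score, labels

print(best_k, best_score)
```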
The SEPs were re-referenced by subtracting the inverting electrode recording from the non-inverting electrode recording for each recording bipole (Verma and Romanauski et al ). The mean SEP response was calculated for each recording channel. To assess the spatial correlation between recorded channels, the normalized zero-lag cross-correlation coefficient was computed between each recording channel for single trials (Swindale and Spacek ). For each channel, the mean pairwise correlation to other channels was calculated over each trial. To assess the uniqueness between channels, the absolute value of the correlation was computed and then subtracted from 1. That is, perfectly correlated channels, with a mean pairwise correlation coefficient of 1, would receive a score of 0, while completely uncorrelated channels would receive a score of 1. The distribution of uniqueness scores was compared using a Kruskal–Wallis test. This non-parametric test was selected as it does not assume normality of the underlying distributions. To evaluate the performance of the EMG response simulation neural network (the forward model), the L1 loss was computed by subtracting the predicted EMG vector from the evoked EMG vector and then taking the absolute value. The L1 loss was selected as the error term as this function penalizes errors linearly (rather than quadratically, in the case of a mean squared error loss function), which reduces the training effect of outlier data points which may become apparent due to volitional movements made by the sheep. The evaluation process occurred on a held-out dataset of mean EMG responses to 480 stimulation combinations, each with 4 repeats. The distributions were compared using a Kruskal–Wallis test. The random performance of each network was determined by generating a randomized prediction for each muscle from a uniform distribution ranging from 0 to the maximum response in the training dataset. Then, this randomized EMG vector was compared to the target vector. The L1 losses were split for each model by inclusion of the stimulating electrode in the training dataset. The L1 losses were compared within models using the Mann–Whitney U test. Finally, the L1 losses were split for each model by muscle, and their distributions were compared using a Kruskal–Wallis test. To evaluate the performance of the parameter inference neural network (the inverse model), the rAUC was computed for each of the responses to inferred parameters. The mean L1 error between the predicted and achieved EMG rAUC vectors was calculated across muscles. Additionally, the L1 error between the EMG responses to the original EES parameters and the responses to these same parameters sent during the evaluation session (which occurred some hours after the training session) was calculated. The distributions of L1 errors were compared using the Kruskal–Wallis test.
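A compact sketch of the bipolar re-referencing and channel-uniqueness metric described above is shown below, using synthetic single-trial data in place of the recorded SEPs.

```python
import numpy as np

# Sketch of bipolar re-referencing and the uniqueness score, applied to
# synthetic data (channels x samples); shapes are illustrative assumptions.
rng = np.random.default_rng(1)
non_inverting = rng.standard_normal((6, 2000))
inverting = rng.standard_normal((6, 2000))

# Re-reference each recording bipole
sep = non_inverting - inverting

# Normalized zero-lag cross-correlation between channels = correlation matrix
corr = np.corrcoef(sep)

# Mean absolute pairwise correlation to the other channels, per channel
n = corr.shape[0]
off_diag = np.abs(corr[~np.eye(n, dtype=bool)].reshape(n, n - 1))
mean_pairwise = off_diag.mean(axis=1)

# Uniqueness: 1 for fully uncorrelated channels, 0 for perfectly correlated
uniqueness = 1.0 - mean_pairwise
print(uniqueness)
```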
Results 3.1. Successful chronic implantation of the HD64 To evaluate the safety of chronic implantation of the HD64 devices, we monitored the condition of the sheep continuously throughout the study and evaluated our ability to control the multiplexer on the active array. We implanted two sheep with 1 (S1) or 2 (S2) HD64 EES paddles, which remained implanted for 15 months. The multiplexers were configured 1598 and 4562 times on 34 and 51 unique days, respectively, throughout the study. No device-related malfunctions were observed, and control of the multiplexer remained stable. The impedances from the distal connector end of the lead-tails to the contacts (including the paths through the multiplexer) were measured at several points up to 274 d post implant. Over this time, no significant changes in the distribution of impedances were observed ( p > 0.05, Mann–Whitney U test) (figure (d)). While the HD64 was implanted in the animals, we observed consistent stimulation-evoked EMG responses and spinal responses to spinal and peripheral stimulation. 3.2. Active multiplexing enables precise motor recruitment After implantation, we evaluated the benefit of a high-density electrode paddle in generating targeted motor recruitment from EES by recording motor-evoked responses from 6 to 8 bilateral lower extremity EMG sensors. Following EES, the EMG signal was rectified, then the rAUC was calculated and plotted as a function of amplitude (figure (a)). Applying monopolar EES to each of the 60 contacts in S1 produced a detectable motor response in at least 1 of 6 muscles. For each electrode contact, the EES amplitude required to reach 33% of the maximum activation for each muscle was identified (figure (b)). Contacts marked N.R. could not recruit the muscle to 33% of its maximum activation. Our analysis identified that a lower EES amplitude was required to recruit ipsilateral muscles than contralateral muscles, consistent with previous literature (Calvert et al ). The results for S2 are presented in supplemental figure 2. The spatial diversity of EES-evoked motor responses was examined by performing unsupervised clustering. UMAP dimensionality reduction was performed on the recruitment data for each of the 60 electrodes. The low-dimensional embeddings were then clustered by statistical similarity. Eight independent clusters were identified. Electrodes within each cluster were colocalized on the HD64 paddle (figure (c), supplemental figure 3). To simulate the performance of a commercially available EES paddle (5-6-5, Medtronic, Minneapolis, MN), a scale drawing of the commercial paddle was overlaid on the HD64 clusters. While some commercial paddle contacts overlaid a single cluster, many contacts straddled two or more independent clusters, and cluster 5 was not reachable (figure (d)). Consequently, the motor responses evoked using the commercial paddle may lack the fidelity available using the HD64. To explore the diversity of the recruitment patterns across clusters, the mean recruitment for each electrode in each cluster was calculated and then normalized using the Yeo–Johnson power transformation (Yeo and Johnson ). Figure (e) provides an example of the recruitment patterns observed for each cluster at a stimulation frequency of 50 Hz. To further explore the stimulation fidelity afforded using a high-density electrode paddle compared to a commercially available paddle, we examined the effect of current steering.
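The 33%-of-maximum recruitment threshold described above can be illustrated with a short sketch; the sigmoid recruitment curve and amplitude grid are synthetic stand-ins for the measured rAUC data.

```python
import numpy as np

# Sketch of the 33%-of-maximum threshold, using a synthetic recruitment curve
# (normalized rAUC vs stimulation amplitude); values are not recorded data.
amplitudes_ua = np.linspace(100, 1500, 15)
rauc = 1.0 / (1.0 + np.exp(-(amplitudes_ua - 800) / 120))   # sigmoid stand-in
rauc = (rauc - rauc.min()) / (rauc.max() - rauc.min())      # normalize to [0, 1]

def threshold_amplitude(amps, activation, fraction=0.33):
    """Smallest amplitude at which activation reaches `fraction` of maximum.

    Returns None (reported as N.R. in the text) if the level is never reached.
    """
    target = fraction * activation.max()
    above = np.nonzero(activation >= target)[0]
    if above.size == 0:
        return None
    i = above[0]
    if i == 0:
        return amps[0]
    # linear interpolation between the bracketing samples
    return np.interp(target, activation[i - 1:i + 1], amps[i - 1:i + 1])

print(threshold_amplitude(amplitudes_ua, rauc))
```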
Our stimulating analog front-ends were equipped with a single current source, enabling us to create a single bipole. When no anodic contact was specified, current returned through the ground wire implanted in the paraspinal muscles. This creates a much more dispersed electric field than if the current return were more proximal to the cathodic contact. By using an anodic stimulation contact, current is simultaneously sourced and sunk at the cathodic and anodic contacts, respectively. Thus, the electric field is ‘steered’ towards the anodic contact, changing the intensity and dispersion of the electric field. EES was delivered using bipolar stimulation pairs spaced using the minimum inter-electrode spacing available on the HD64 (‘narrow’, supp. figures 4(a) and (g)), or the minimum spacing available on the 5-6-5 (‘wide’, supp. figures 4(d) and (g)), and the two configurations were compared to each other and to a monopolar baseline (supp. figures 4(a) and (d)). For each lower extremity muscle, the rAUC and selectivity of the response to stimulation were calculated, as described previously. Using a Mann–Whitney U test, a significant difference was identified between the narrow and wide bipolar stimulations for all stimulation amplitudes ( p < 0.05), though never in all muscles, indicative of a difference in the selectivity of activation (supp. figure 4(h)). Comparing the selectivity of the narrow and wide bipolar fields, a significant difference was identified in at least one muscle for all stimulation amplitudes. At the maximum EES amplitude tested (1500 μ A), 66% of instrumented muscles demonstrated significantly different response selectivity (supp. figure 4(i)). Other bipolar arrangements are presented in supplemental figure 5. 3.3. Active multiplexing enables localized referencing To evaluate the benefits of dense recording bipoles, we began examining epidural spinal responses to spinal and peripheral stimulation. The experimental configuration for this section is shown in figure (a). To examine the propagation of ECAPs on the spinal cord using the HD64, an EES bipole was created on the caudal midline of the paddle. Four bipolar recording pairs were established along the midline of the paddle, with centers located 7.3 mm, 17.0 mm, 26.7 mm, and 31.6 mm rostral to the center of the stimulating bipole, respectively (figure (b)). Single cathode-leading, biphasic, symmetrical EES pulses were delivered with amplitudes of 2 mA, 4 mA, and 6 mA. The pulse width remained constant at 25 μ s. Note that this pulse width is shorter than the stimulation waveform used to study EES-evoked motor responses. This short pulse width and symmetrical aspect ratio were selected to minimize the effect of the stimulation artifact in the temporal window of interest immediately following stimulation. Decreasing the pulse width (and aspect ratio) necessitated a corresponding increase in the stimulation amplitude range used. The responses recorded by each bipole are presented in figure (c). The elapsed time between stimulation and the peak of the ECAP was identified at each bipole for each of the 50 trials (figure (d)). Using the distance between the stimulating and recording bipoles and the time of peak response, the CV of the evoked response was calculated. For stimulation amplitudes of 2 mA, 4 mA, and 6 mA, the conduction velocities were 114.9 m s −1 , 114.6 m s −1 , and 111.8 m s −1 , respectively. These conduction velocities are within the range identified in previous work (Capogrosso et al , Parker et al ).
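The conduction-velocity estimate follows from a linear fit of bipole distance against ECAP peak latency; a minimal sketch is given below, with synthetic latencies chosen to be consistent with roughly 115 m s −1 conduction rather than taken from the recordings.

```python
import numpy as np

# Sketch of the CV estimate: fit recording-bipole distance against ECAP peak
# latency; the gradient of the fit is the conduction velocity. Latencies are
# synthetic values (distance / 115 m/s plus a constant offset), not data.
distances_mm = np.array([7.3, 17.0, 26.7, 31.6])
latencies_ms = distances_mm / 115.0 + 0.02

# Fit distance (m) as a function of latency (s); the slope is CV in m/s
slope, intercept = np.polyfit(latencies_ms * 1e-3, distances_mm * 1e-3, 1)
print(f"conduction velocity = {slope:.1f} m/s")
```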
Additionally, the conduction velocities presented less than 2.5% deviation as stimulation amplitude increased, consistent with activating similar distributions of neural fibers. The presence of such an orthodromically propagating response is a fundamental demonstration of the HD64’s ability to faithfully convey potentials of neurogenic origin. We examined the SEPs in response to peripheral application of TENS using three recording strategies, each containing six recording channels (supp. figure 6(a)). The monopolar recording strategy used six consecutive HD64 contacts, with a recording reference wire implanted in the epidural space. The bipolar HD64 arrangement created six consecutive bipolar recording pairs using the minimum inter-electrode spacing available on the HD64. Finally, the bipolar 5-6-5 arrangement simulated the best-case bipolar arrangement using a commercial spinal paddle by creating six bipolar recording pairs with an interelectrode distance similar to that found on the Medtronic 5-6-5 (a to-scale drawing of this paddle is presented in blue in supp. figure 6(a)). Supplementary figure 6(b) shows the trial average (50 trials) SEP in response to 25 V TENS applied to the right fetlock for all three arrangements. While the SEP is clearly prominent in the monopolar arrangement, the responses recorded by all channels are highly correlated. To identify the degree of correlation in recorded SEPs between channels, the mean pairwise zero-lag cross-correlation coefficient was calculated for each channel on single trials. The effect of arbitrary bipole orientation was removed by taking the absolute value of the correlation coefficient. A statistically significant difference between the distribution of correlations was identified for all recording arrangements ( p < 0.05, Kruskal–Wallis test) (supp. figure 6(c)). Spectral comparisons are made in supplemental figure 7. 3.4. Continuous encoding of electrode position enables inference over electrode space Increasing the number of stimulation contacts increases the solution space of automated parameter inference neural networks. We leveraged the dense grid of contacts on the HD64 to enable spatial encoding of EES parameters during training and inference. The reparameterized neural network models were successfully trained to infer evoked EMG power following EES for all three electrode densities (figure (a)). The cumulative L1 loss between the predicted and actual EMG power for the held-out dataset is shown at the top of figure (b) for each instance of the ‘forward’ EMG prediction model. Significant differences were found between the 25% model and all other models ( p < 0.05, Kruskal–Wallis test), but not between the 50% and 100% models ( p > 0.05, Kruskal–Wallis test). Examining the L1 prediction error for each model, a significantly higher error was produced when predicting responses to stimulation on unseen electrodes for the 25% model ( p < 0.05, Mann–Whitney U test). This was not present for the 50% model, indicating the 50% model’s ability to accurately infer EMG responses for unseen stimulation sites ( p > 0.05, Mann–Whitney U test) (figure (b) bottom). In all cases, the prediction error was less than random chance (indicated by blue dots in figure (b)). 
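The forward-model evaluation (cumulative L1 loss compared against a uniform random-chance baseline, with a seen/unseen electrode split) can be sketched as follows; the predicted and actual EMG matrices and the electrode partition are synthetic placeholders, not model outputs.

```python
import numpy as np
from scipy.stats import mannwhitneyu, kruskal

# Sketch of the forward-model evaluation with synthetic predicted/actual EMG
# power vectors (480 held-out samples x 6 muscles) standing in for model output.
rng = np.random.default_rng(2)
actual = rng.random((480, 6))
predicted = actual + 0.05 * rng.standard_normal((480, 6))

l1 = np.abs(predicted - actual).sum(axis=1)         # cumulative L1 per sample

# Random-chance baseline: uniform predictions up to the training-set maximum
random_pred = rng.uniform(0.0, actual.max(), size=actual.shape)
l1_random = np.abs(random_pred - actual).sum(axis=1)

# Compare model error against chance, and seen vs unseen stimulation sites
seen_mask = rng.random(480) < 0.5                   # placeholder partition
print(kruskal(l1, l1_random))
print(mannwhitneyu(l1[seen_mask], l1[~seen_mask]))
```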
Decomposing the L1 loss by muscle reveals the same result identified previously, where the only significant difference in L1 distribution occurs between the 25% model and the remaining models ( p < 0.05, Kruskal–Wallis test), except for the left gastrocnemius, where prediction loss followed the same distribution for all models ( p > 0.05, Kruskal–Wallis test) (supp. figure 8). Following successful amortization of spinal sensorimotor computations by the forward models, we performed inference using the inverse models to identify EES parameters predicted to evoke target EMG responses. The inverse models produced joint posterior density functions over stimulation parameters and electrode spaces for 32 target EMG vectors. Figure (c) compares the posterior densities across the three inverse models for a single EMG target. Posteriors for additional EMG targets are provided in supplemental figure 9. Here, the 25% model predicts low likelihood across all electrode- and parameter-space. As the number of included electrode positions increases, the model localizes an optimal stimulation location, indicated by the focal region of high likelihood. To evaluate the performance of the models, we selected target EMG vectors from the held-out dataset for evaluation in vivo . The workflow and representative results are outlined in figure (d). The stimulation parameter and electrode combination that produced the highest likelihood is identified and then delivered using the stimulator device. The evoked EMG responses from the six instrumented muscles are recorded and then compared offline to the target EMG vector. This process was repeated for all three models. We then compared the L1 error between the target EMG response and the evoked EMG response across models (figure (e)). A nonparametric Kruskal–Wallis test revealed that the underlying distributions of L1 errors between targets and the responses were not significantly different between models ( p > 0.05). This indicates that adequate training data to infer stimulation parameters to faithfully reproduce target EMG activity could be collected using a sparse subset (25%) of the available stimulation contacts. Further, no significant differences were observed between the re-sent stimulation parameter (the ‘ground truth’) distribution and the proposal distributions. This indicates that the parameter inference performance approached the soft upper bound given by the natural variations in EMG recorded across time.
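The proposal-selection step can be illustrated by taking the argmax of a posterior evaluated on a discretized amplitude, frequency, and electrode-coordinate grid; the grid resolution and the posterior values below are assumptions, not the inverse model's actual output.

```python
import numpy as np

# Sketch of proposal selection: pick the highest-likelihood combination of
# amplitude, frequency, and contact position from a posterior density grid.
rng = np.random.default_rng(3)
amps = np.linspace(0, 1, 20)        # normalized amplitude
freqs = np.linspace(0, 1, 10)       # normalized frequency
xs = np.linspace(0, 1, 4)           # mediolateral coordinate
ys = np.linspace(0, 1, 15)          # rostrocaudal coordinate

posterior = rng.random((20, 10, 4, 15))             # placeholder joint likelihood

# Argmax over the joint grid gives the proposed stimulation parameters
ia, ifr, ix, iy = np.unravel_index(np.argmax(posterior), posterior.shape)
proposal = dict(amplitude=amps[ia], frequency=freqs[ifr],
                contact_xy=(xs[ix], ys[iy]))
print(proposal)
```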
Discussion In this work, we presented a novel, high-density, active electronic EES paddle, the HD64. We evaluated the hermeticity of the active electronics assembly, and found leak rates 15 times smaller than the threshold defined by MIL-STD-883K. Following ISO 10993-1:2018 biocompatibility testing, we confidently progressed to in vivo testing. Using the onboard electronics, we quantified the improvements in the diversity and selectivity of motor responses evoked using the HD64 compared to wider bipolar configurations. Further, we demonstrated the capabilities of the HD64 to provide superior referencing of somatosensory evoked potentials following peripheral nerve stimulation, significantly reducing the mean spatial correlation compared to bipoles with similar spacing to commercial paddles. Finally, we extended the current state-of-the-art stimulation parameter inference model to encode sensorimotor-EES relations, enabling increased spatial resolution of inferred EES parameters and utilizing 2.5x higher channel count compared to prior preclinical work and clinical standards. The HD64 meets all 4 criteria we established for a next-generation EES paddle. The rostrocaudal length of the treatment area enables delivery of stimulation to 2 vertebral segments per array. The length of the HD64 is within the bounds of existing paddle arrays, which are used commercially to target multiple spinal segments for pain modulation. The mediolateral span is sufficiently large to target lateral structures, such as the dorsal roots. This wider span is facilitated by the tapered cross section of the HD64, enabling the edges of the device to flex as needed in the epidural potential space. We have demonstrated localized current steering, by establishing cathodic and anodic currents on neighboring electrodes, and leveraged increased recording resolution to uncover spatial differences in SEPs. Finally, the reprogrammability enabled by the onboard multiplexer has been used to reposition the stimulating contact in response to optimal stimulation suggestions made by neural network models. When considering EES-enabled neurorehabilitation, the HD64 presents a rostrocaudal span comparable to that of commercial pain modulation arrays. This span limits the number of motor pools available for targeting during spatiotemporal motor-restorative EES, compared to larger clinical electrode arrays (e.g. Medtronic 5-6-5, Boston Scientific CoverEdge X). However, a stacked approach utilizing 2 HD64 arrays enables the benefits afforded by high channel counts to be maintained while targeting the necessary span of motor pools and possessing a clinically-relevant number of lead tails (for example, 2 HD64s present a longer rostrocaudal span, approximately 4 times the available stimulation contacts, and increased contact density compared to the Boston Scientific CoverEdgeX, while maintaining the same number of lead tails). An alternative approach to address an increased rostrocaudal span would be to increase the number of contacts as the size of the array increases. This approach would require 1 surgical entrance to the epidural potential space, compared to 2 in the stacked approach. However, using a long, contiguous paddle constrains the rostrocaudal span to the length of the electrode, which may not be appropriate for all patient anatomies. 
In the stacked approach, the rostrocaudal separation between the two paddles can be chosen to ensure optimal placement of HD64 contacts over motor pools (and space between paddles in areas without motor pools). Thus, using the stacked 2-paddle approach shown in S2, future EES for neurorehabilitation research leveraging the HD64 can access the required span of motor pools using highly selective stimulation. Looking forward, our work here is a fundamental demonstration of active electronics embedded into neural interface electrodes themselves, and the resultant improvements in stimulation targeting and recording resolution. While our work leverages multiplexing to achieve a high channel count, other architectures may use the same hermetic approach to integrate digital signal processing, neural network inference, or stimulation control directly into electrodes. We expect others to continue the path towards highly functional integrated neural interfaces to restore lost function following trauma or disease. 4.1. Active electronics can be chronically hermetically sealed in an EES paddle Our novel paddle contains an active multiplexing chip hermetically sealed on the body of the EES paddle, enabling communication with 60 bidirectional contacts through two 12-contact percutaneous lead tails. The increased contact density combinatorially increases the number of possible stimulation configurations available for research and therapeutic applications. In contrast to this reconfigurability, current clinically available EES paddles sacrifice spatial resolution in favor of a general, ‘one-size-fits-most’ approach. In research studies, prior work has suggested a library of paddles of different sizes to suit person-to-person variability in anatomy (Rowald et al ). Instead, the reconfigurability of the HD64 allows for localized concentration of stimulating electrodes to activate a specific area of the spinal cord, enabling focal stimulation with current steering to prevent off-target effects without sacrificing the ability to simultaneously stimulate other distal regions. As the first example of an EES paddle including active electronics, we were particularly interested in assessing the ability of the encapsulant to maintain a hermetic seal during chronic implantation. We found that the HD64 remained functional for at least 15 months after implantation and did not display symptoms of fluid ingress. Communication between the externalized stimulation and recording hardware and the onboard multiplexer remained robust throughout the study. This initial result is promising for the future of epidural medical devices containing active electronics. With current technologies, chips of similar dimensions to the present device are capable of significant computational power. Our results may open the door for diverse advances in on-paddle processing, including signal processing or closed-loop stimulation parameter control, as has been hypothesized in prior literature (Zhang et al ). While additional testing and regulatory approval are required before such technologies can be translated into clinical applications, this demonstration represents a promising first step. 4.2. Onboard active multiplexing enables chronic stimulation-evoked responses Our study benefits from decades of prior research showing the selective activation of motor neuron pools using EES applied to the lumbosacral spinal cord.
Specifically, EES has been utilized heavily to restore walking ability following a spinal cord injury (Capogrosso et al , Angeli et al , Gill et al ). Other studies have used EES to establish somatotopic maps of stimulation location to recruit lower limb extremities (Hofstoetter et al ). Extending the mediolateral span through which EES can be delivered, while maintaining a high contact-density, enabled us to specifically target lateral structures of the spinal cord. Stimulation at these lateral sites generated stereotyped EMG responses statistically different from those achieved by medial stimulation, as determined by our spectral clustering analysis. These differences highlight the importance of lateral stimulation sites for enhancing the capability of locomotor-restorative EES. Such sites are not present on stimulation paddle arrays designed for chronic pain management. The statistically similar clusters of electrodes identified in figure (d) are not as clearly demarcated as hypothesized. Examining the recruitment curves in figure (e) provides some additional clarity. Clusters 0 and 1 evoke strong, bilateral activation, consistent with midline stimulation. Cluster 2 evokes a strong, right-side biased response, while cluster 3 evokes a weaker, left-side biased response. Cluster 5 evokes a weak, right-side specific response, consistent with right-side lateral stimulation, with the remaining clusters evoking weak, non-specific responses. Overall, contacts on the caudal 5 rows of contacts evoked the strongest responses (as seen in figure (b)). From prior literature, the stronger EMG activity in this region indicates the approximate location of a dorsal root entry zone (Hofstoetter et al ). In this region, the mediolateral organization identified in prior literature is maintained (Capogrosso et al , Rowald et al , Calvert et al , Lorach et al ). In S1, IM EMG recordings were taken, however S2 used surface EMG. Due to the proximity of instrumented muscles on the sheep leg, and the lower resolution of surface recordings, we did not observe selectivity between proximal and distal muscles as the stimulation site was moved rostrocaudally. This is reflected in supplementary figure 3. Instead, we observe the mediolateral striation described in prior literature (Capogrosso et al , Rowald et al , Calvert et al , Lorach et al ). Recording EES-evoked spinal responses, or ECAPs, can provide quantitative data regarding EES paddle status. ECAPs have been used as a control signal for pain-modulating EES (Mekhail et al ), and we have previously suggested their utility in the identification of neural anatomy (Calvert et al ). Using bipolar recording configurations on the HD64, we observed an orthodromically propagating response to EES. The conduction velocities of these responses are consistent with the published ranges for Type Ia and Ib axons, which have been shown to be recruited by EES in computational models (Capogrosso et al , Parker et al ). Further, our work demonstrates the formation of dense recording bipoles on the ovine spinal cord, and uses these recording bipoles to examine EES- and TENS-evoked spinal responses. This is timely for the field of neural interfaces, with the recent advent of high-density neural probes (Jun et al , Steinmetz et al , Chamanzar et al ) and electrocorticography grids (Rachinskiy et al , Palopoli-Trojani et al ). Using conventional EES paddles, contacts are sparsely distributed to ensure coverage of a large area using a small number of electrodes. 
This sets a high minimum limit on contact separation. When considering EES or sensing, we suggest that the active multiplexing on the HD64 enables dense bipoles to be created at arbitrary points of interest. Importantly, contact separation can be non-uniform (for example, tight spacing in regions of high spatial variability and wider spacing in less volatile regions) or change with time or desired function. These ‘activity-dependent’ paddle arrangements allow for the capabilities of the stimulation system to be adjusted to suit the needs of patients when completing various activities of daily living (for example, greater trunk control during sitting vs lower extremity motor control during locomotion in patients with motor dysfunction). 4.3. Spatial encoding of EES enables parameter inference over arbitrary channels Stimulation parameter inference remains an obstacle to widespread clinical implementation of EES (Solinsky et al ). Our previous parameter selection automation work (Govindarajan et al ), and other subsequent literature (Bonizzato et al ) treat stimulating electrodes as discrete variables. As such, spatial relationships (such as those uncovered in figure (d)) cannot be leveraged by the model. In this work, we instead evaluated a novel electrode encoding scheme that accounts for spatial relationships between electrode contacts. This approach is extensible to very high channel counts and allows the network to identify optimal stimulation locations not included in the training dataset. Using the results shown here, active electronics conducting neural network inference may be hermetically sealed into the paddle array body, enabling stimulation parameter inference to be conducted without additional hardware or communication requirements. As the number of available monopolar stimulation channels increases, the time required to collect training data from every contact scales linearly. Unaddressed, this is a barrier to clinical translation of parameter selection automation techniques and high-density EES paddles. Here, our work shows that exhaustive sampling of every stimulation contact is not required if the electrode encoding scheme is continuous. No significant decrease in EMG response prediction performance is observed for electrode inclusion rates as low as 50%, and stimulation parameter inference performance did not significantly vary across inclusion rates tested. The development of efficient machine learning tools, such as the one presented in this manuscript, greatly reduces the amount of training data required. Increasing efficiency and decreasing data collection efforts will aid in scaling EES for clinical use, which could yield meaningful benefits during spinal rehabilitation in patients with neural dysfunction. 4.4. Study limitations and implications for future research Our study introduces a high-density smart EES paddle with an integrated multiplexer and demonstrates the utility of increased electrode density. However, several limitations should be considered. While the characterization presented in this manuscript empirically supports the use of high-density electrode paddles in these applications, a more thorough assessment may be possible with the addition of computational models. Previous advances in electrode design were informed by finite element models of 15 healthy volunteers (Rowald et al ). 
Such models could quantify recruitment of off-target motor neuron pools after applying stimulation at bipoles of various sizes, however the work presented here utilizes an ovine model. The spinal cord of the sheep is similar to that of humans in terms of gross anatomy and size (critically, the transverse width of the epidural space in the lumbar region is 17.5 ± 1.2 mm (Wilke et al ) compared to 17.87 ± 1.47 mm in humans (Lee et al )) and has been used as a model for the study of spinal electrophysiology (Parker et al , , Chakravarthy et al , Calvert et al ), but the results of this manuscript may not be generalizable to human anatomy. The mediolateral span of the HD64 is approximately 4 mm wider than commercial EES paddles traditionally used for the treatment of neuropathic pain, however the HD64 presents a tapered profile. The HD64 maintains its full 2 mm thickness for a narrower mediolateral span than commercial SCS leads. Examining prior studies of spinal cord and spinal canal dimensions, the transverse diameter of the spinal cord in the T11 to L1 vertebral regions is between 8 and 9.6 mm (Ko et al , Fradet et al ). In the same region, the spinal canal sagittal depth is between 15.4 and 19.54 mm, while the canal transverse width is 16.7–26.5 mm (Laporte et al , Busscher et al ). Further, the spinal cord exhibits positive anteroposterior eccentricity in this region (that is, the centroid of the spinal cord is more anterior than the centroid of the spinal canal (Fradet et al )). A scale diagram showing the HD64 implanted in this region is depicted in supplementary figure 10. The previously published dimensions and tapered profile do not indicate substantial difficulty avoiding neural structures during implantation of the HD64 in the examined vertebral levels. Future studies are required to evaluate the therapeutic potential of high-density EES electrodes or to quantify the effect of stimulation focality on behavioral values such as sensation. One current barrier to clinical translation of the HD64 is the lack of existing implantable pulse generators (IPGs) capable of generating the power and communication signals necessary to control the onboard multiplexer. Additionally, the inclusion of active electronics in implanted electrodes increases the power draw of the implantable system, and must be considered when selecting an appropriate IPG battery size. An example rechargeable IPG contains a 200 mAh battery (Saluda Medical ), which is sufficient to deliver therapy for between 24 and 168 h (The Advanced Spine Center ). This equates to a typical current draw during therapy of between 1.19 and 8.3 mA. The HD64 consumes only 20 μ A in steady-state operation and 100 μ A when reconfiguring connections, representing a worst-case increase in current draw of less than 5%. Therefore, we predict the impacts on recharge interval or IPG battery capacity to be minimal. While the hermeticity evaluations performed in this manuscript exceeded our requirements for a 15 month implant duration, additional evaluation of the package may be required for more chronic implants. Our neural network models demonstrate sufficient accuracy to match EMG patterns, however, these experiments were conducted with the sheep at rest and elevated in a sling. A more functional task may instead consider applying a stimulation sequence to match a target kinematic trajectory. 
Additionally, our neural network parameter inference is limited to monopolar stimulation patterns and, therefore, cannot take advantage of the improved selectivity identified by bipolar stimulation in this manuscript. Bipolar stimulation dramatically increases the number of electrode configurations the network must evaluate; the additional complexity of which is a promising target for future research.
Active electronics can be chronically hermetically sealed in an EES paddle Our novel paddle contains an active multiplexing chip hermetically sealed on the body of the EES paddle, enabling communication with 60 bidirectional contacts through two 12-contact percutaneous lead tails. The increased contact density combinatoriality increases the number of possible stimulation configurations available for research and therapeutic applications. In contrast to this reconfigurability, current clinically available EES paddles sacrifice spatial resolution in favor of a general, ‘one-size-fits-most’ approach. In research studies, prior work has suggested a library of paddles of different sizes to suit person-to-person variability in anatomy (Rowald et al ). Instead, the reconfigurability of the HD64 allows for localized concentration of stimulating electrodes to activate a specific area of the spinal cord, enabling focal stimulation with current steering to prevent off-target effects without sacrificing the ability to simultaneously stimulate other distal regions. As the first example of an EES paddle including active electronics, we were particularly interested in assessing the ability of the encapsulant to maintain patency during chronic implantation. We found that the HD64 remained functional for at least 15 months after implantation and did not display symptoms of fluid ingress. Communication between the externalized stimulation and recording hardware and the onboard multiplexer remained robust throughout the study. This initial result is promising for the future of epidural medical devices containing active electronics. With current technologies, chips of similar dimensions to the present device are capable of significant computational power. Our results may open the door for diverse advances in on-paddle processing, including signal processing or closed-loop stimulation parameter control, as has been hypothesized in prior literature (Zhang et al ). While additional testing and regulatory approval is required before such technologies can be translated into clinical applications, this demonstration represents a promising first step.
Onboard active multiplexing enables chronic stimulation-evoked responses Our study benefits from decades of prior research showing the selective activation of motor neuron pools using EES applied to the lumbosacral spinal cord. Specifically, EES has been utilized heavily to restore walking ability following a spinal cord injury (Capogrosso et al , Angeli et al , Gill et al ). Other studies have used EES to establish somatotopic maps of stimulation location to recruit lower limb extremities (Hofstoetter et al ). Extending the mediolateral span through which EES can be delivered, while maintaining a high contact-density, enabled us to specifically target lateral structures of the spinal cord. Stimulation at these lateral sites generated stereotyped EMG responses statistically different from those achieved by medial stimulation, as determined by our spectral clustering analysis. These differences highlight the importance of lateral stimulation sites for enhancing the capability of locomotor-restorative EES. Such sites are not present on stimulation paddle arrays designed for chronic pain management. The statistically similar clusters of electrodes identified in figure (d) are not as clearly demarcated as hypothesized. Examining the recruitment curves in figure (e) provides some additional clarity. Clusters 0 and 1 evoke strong, bilateral activation, consistent with midline stimulation. Cluster 2 evokes a strong, right-side biased response, while cluster 3 evokes a weaker, left-side biased response. Cluster 5 evokes a weak, right-side specific response, consistent with right-side lateral stimulation, with the remaining clusters evoking weak, non-specific responses. Overall, contacts on the caudal 5 rows of contacts evoked the strongest responses (as seen in figure (b)). From prior literature, the stronger EMG activity in this region indicates the approximate location of a dorsal root entry zone (Hofstoetter et al ). In this region, the mediolateral organization identified in prior literature is maintained (Capogrosso et al , Rowald et al , Calvert et al , Lorach et al ). In S1, IM EMG recordings were taken, however S2 used surface EMG. Due to the proximity of instrumented muscles on the sheep leg, and the lower resolution of surface recordings, we did not observe selectivity between proximal and distal muscles as the stimulation site was moved rostrocaudally. This is reflected in supplementary figure 3. Instead, we observe the mediolateral striation described in prior literature (Capogrosso et al , Rowald et al , Calvert et al , Lorach et al ). Recording EES-evoked spinal responses, or ECAPs, can provide quantitative data regarding EES paddle status. ECAPs have been used as a control signal for pain-modulating EES (Mekhail et al ), and we have previously suggested their utility in the identification of neural anatomy (Calvert et al ). Using bipolar recording configurations on the HD64, we observed an orthodromically propagating response to EES. The conduction velocities of these responses are consistent with the published ranges for Type Ia and Ib axons, which have been shown to be recruited by EES in computational models (Capogrosso et al , Parker et al ). Further, our work demonstrates the formation of dense recording bipoles on the ovine spinal cord, and uses these recording bipoles to examine EES- and TENS-evoked spinal responses. 
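As a worked illustration of how the reported conduction velocities can be derived from such recordings, the short Python sketch below estimates velocity from the difference in ECAP peak latency between two recording bipoles at a known rostrocaudal separation; the separation and latency values are hypothetical placeholders rather than measurements from this study.

```python
# Illustrative estimate of ECAP conduction velocity from two recording bipoles.
# The 20 mm separation and the latencies below are hypothetical placeholders,
# not values measured in this study.

def conduction_velocity_m_per_s(separation_mm, latency_rostral_ms, latency_caudal_ms):
    """Velocity = bipole separation / difference in ECAP peak latency."""
    dt_ms = abs(latency_caudal_ms - latency_rostral_ms)
    if dt_ms == 0:
        raise ValueError("Latencies must differ to estimate a velocity.")
    return (separation_mm / 1000.0) / (dt_ms / 1000.0)

v = conduction_velocity_m_per_s(20.0, 1.20, 1.45)
print(f"Estimated conduction velocity: {v:.0f} m/s")  # 80 m/s, within the broad
# range reported for large-diameter (Type Ia/Ib) proprioceptive afferents
```

The dense, reconfigurable recording bipoles that make such measurements possible are considered further below.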
This is timely for the field of neural interfaces, with the recent advent of high-density neural probes (Jun et al , Steinmetz et al , Chamanzar et al ) and electrocorticography grids (Rachinskiy et al , Palopoli-Trojani et al ). Using conventional EES paddles, contacts are sparsely distributed to ensure coverage of a large area using a small number of electrodes. This sets a high minimum limit on contact separation. When considering EES or sensing, we suggest that the active multiplexing on the HD64 enables dense bipoles to be created at arbitrary points of interest. Importantly, contact separation can be non-uniform (for example, tight spacing in regions of high spatial variability and wider spacing in less volatile regions) or change with time or desired function. These ‘activity-dependent’ paddle arrangements allow for the capabilities of the stimulation system to be adjusted to suit the needs of patients when completing various activities of daily living (for example, greater trunk control during sitting vs lower extremity motor control during locomotion in patients with motor dysfunction).
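To make the idea of forming dense bipoles at arbitrary points of interest concrete, the minimal sketch below selects the two contacts nearest a target location on an assumed rectangular contact grid; the 6 × 10 layout and 2 mm pitch are illustrative assumptions and not the HD64's actual geometry.

```python
import math

# Assumed 6 x 10 contact grid with 2 mm pitch -- illustrative only, not the
# HD64's actual layout.
contacts = {i: (2.0 * (i % 6), 2.0 * (i // 6)) for i in range(60)}  # id -> (x, y) in mm

def nearest_bipole(target_xy, contacts):
    """Return the two contact ids closest to target_xy, forming a tight bipole."""
    ranked = sorted(contacts, key=lambda i: math.hypot(contacts[i][0] - target_xy[0],
                                                       contacts[i][1] - target_xy[1]))
    return ranked[0], ranked[1]

# Example: place a recording bipole as close as possible to a point of interest.
print(nearest_bipole((3.1, 8.4), contacts))
```

The same selection logic could be re-run whenever the desired function changes, which is the sense in which the paddle arrangement can be treated as activity-dependent.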
Spatial encoding of EES enables parameter inference over arbitrary channels Stimulation parameter inference remains an obstacle to widespread clinical implementation of EES (Solinsky et al ). Our previous parameter selection automation work (Govindarajan et al ), and other subsequent literature (Bonizzato et al ) treat stimulating electrodes as discrete variables. As such, spatial relationships (such as those uncovered in figure (d)) cannot be leveraged by the model. In this work, we instead evaluated a novel electrode encoding scheme that accounts for spatial relationships between electrode contacts. This approach is extensible to very high channel counts and allows the network to identify optimal stimulation locations not included in the training dataset. Using the results shown here, active electronics conducting neural network inference may be hermetically sealed into the paddle array body, enabling stimulation parameter inference to be conducted without additional hardware or communication requirements. As the number of available monopolar stimulation channels increases, the time required to collect training data from every contact scales linearly. Unaddressed, this is a barrier to clinical translation of parameter selection automation techniques and high-density EES paddles. Here, our work shows that exhaustive sampling of every stimulation contact is not required if the electrode encoding scheme is continuous. No significant decrease in EMG response prediction performance is observed for electrode inclusion rates as low as 50%, and stimulation parameter inference performance did not significantly vary across inclusion rates tested. The development of efficient machine learning tools, such as the one presented in this manuscript, greatly reduces the amount of training data required. Increasing efficiency and decreasing data collection efforts will aid in scaling EES for clinical use, which could yield meaningful benefits during spinal rehabilitation in patients with neural dysfunction.
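The following minimal sketch illustrates the general idea of a continuous, spatially aware electrode encoding, in which a contact is represented by its normalized paddle coordinates rather than a categorical index, so that a regressor can interpolate to contacts excluded from training; the grid geometry, feature set and training split shown are assumptions for demonstration and do not reproduce the network used in this work.

```python
import numpy as np

# Assumed paddle geometry for illustration only.
N_COLS, N_ROWS, PITCH_MM = 6, 10, 2.0

def encode_stimulus(contact_id, amplitude_ma, frequency_hz):
    """Encode a monopolar stimulus as [x_norm, y_norm, amplitude, frequency]."""
    x = (contact_id % N_COLS) * PITCH_MM
    y = (contact_id // N_COLS) * PITCH_MM
    x_norm = x / ((N_COLS - 1) * PITCH_MM)   # 0 (one lateral edge) .. 1 (the other)
    y_norm = y / ((N_ROWS - 1) * PITCH_MM)   # 0 (rostral) .. 1 (caudal)
    return np.array([x_norm, y_norm, amplitude_ma, frequency_hz], dtype=np.float32)

# Because the encoding is continuous, a regressor trained on a subset of contacts
# can interpolate EMG responses for held-out contacts (50% inclusion shown here).
X_train = np.stack([encode_stimulus(c, 2.0, 40.0) for c in range(0, 60, 2)])
print(X_train.shape)  # (30, 4)
```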
Study limitations and implications for future research Our study introduces a high-density smart EES paddle with an integrated multiplexer and demonstrates the utility of increased electrode density. However, several limitations should be considered. While the characterization presented in this manuscript empirically supports the use of high-density electrode paddles in these applications, a more thorough assessment may be possible with the addition of computational models. Previous advances in electrode design were informed by finite element models of 15 healthy volunteers (Rowald et al ). Such models could quantify recruitment of off-target motor neuron pools after applying stimulation at bipoles of various sizes, however the work presented here utilizes an ovine model. The spinal cord of the sheep is similar to that of humans in terms of gross anatomy and size (critically, the transverse width of the epidural space in the lumbar region is 17.5 ± 1.2 mm (Wilke et al ) compared to 17.87 ± 1.47 mm in humans (Lee et al )) and has been used as a model for the study of spinal electrophysiology (Parker et al , , Chakravarthy et al , Calvert et al ), but the results of this manuscript may not be generalizable to human anatomy. The mediolateral span of the HD64 is approximately 4 mm wider than commercial EES paddles traditionally used for the treatment of neuropathic pain, however the HD64 presents a tapered profile. The HD64 maintains its full 2 mm thickness for a narrower mediolateral span than commercial SCS leads. Examining prior studies of spinal cord and spinal canal dimensions, the transverse diameter of the spinal cord in the T11 to L1 vertebral regions is between 8 and 9.6 mm (Ko et al , Fradet et al ). In the same region, the spinal canal sagittal depth is between 15.4 and 19.54 mm, while the canal transverse width is 16.7–26.5 mm (Laporte et al , Busscher et al ). Further, the spinal cord exhibits positive anteroposterior eccentricity in this region (that is, the centroid of the spinal cord is more anterior than the centroid of the spinal canal (Fradet et al )). A scale diagram showing the HD64 implanted in this region is depicted in supplementary figure 10. The previously published dimensions and tapered profile do not indicate substantial difficulty avoiding neural structures during implantation of the HD64 in the examined vertebral levels. Future studies are required to evaluate the therapeutic potential of high-density EES electrodes or to quantify the effect of stimulation focality on behavioral values such as sensation. One current barrier to clinical translation of the HD64 is the lack of existing implantable pulse generators (IPGs) capable of generating the power and communication signals necessary to control the onboard multiplexer. Additionally, the inclusion of active electronics in implanted electrodes increases the power draw of the implantable system, and must be considered when selecting an appropriate IPG battery size. An example rechargeable IPG contains a 200 mAh battery (Saluda Medical ), which is sufficient to deliver therapy for between 24 and 168 h (The Advanced Spine Center ). This equates to a typical current draw during therapy of between 1.19 and 8.3 mA. The HD64 consumes only 20 μ A in steady-state operation and 100 μ A when reconfiguring connections, representing a worst-case increase in current draw of less than 5%. Therefore, we predict the impacts on recharge interval or IPG battery capacity to be minimal. 
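A back-of-the-envelope check of the battery figures quoted above is shown below; the capacity and draw values are taken directly from the text, while the simple capacity-divided-by-time model (and treating the 100 µA reconfiguration draw as a brief transient) is an illustrative assumption.

```python
# Back-of-the-envelope check of the battery figures quoted above.
BATTERY_MAH = 200.0                    # example rechargeable IPG capacity
HOURS_PER_CHARGE = (24.0, 168.0)       # reported therapy time per charge
MUX_STEADY_STATE_MA = 0.020            # HD64 draw; the 100 uA reconfiguration draw is transient

for hours in HOURS_PER_CHARGE:
    therapy_ma = BATTERY_MAH / hours                   # mean draw implied by the recharge interval
    increase_pct = 100.0 * MUX_STEADY_STATE_MA / therapy_ma
    print(f"{hours:5.0f} h/charge -> ~{therapy_ma:.2f} mA therapy draw, "
          f"multiplexer adds ~{increase_pct:.1f}%")
# 168 h/charge -> ~1.19 mA (+~1.7%); 24 h/charge -> ~8.33 mA (+~0.2%)
```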
While the hermeticity evaluations performed in this manuscript exceeded our requirements for a 15 month implant duration, additional evaluation of the package may be required for more chronic implants. Our neural network models demonstrate sufficient accuracy to match EMG patterns, however, these experiments were conducted with the sheep at rest and elevated in a sling. A more functional task may instead consider applying a stimulation sequence to match a target kinematic trajectory. Additionally, our neural network parameter inference is limited to monopolar stimulation patterns and, therefore, cannot take advantage of the improved selectivity identified by bipolar stimulation in this manuscript. Bipolar stimulation dramatically increases the number of electrode configurations the network must evaluate; the additional complexity of which is a promising target for future research.
Conclusions In summary, we successfully designed and chronically implanted a high-density smart EES paddle, the HD64, which is the first to integrate active electronics into a hermetic package on the spinal cord. Our hermetic assembly was tested to exceed standards for active hermetic electronics. During chronic in vivo implantation, no device-related malfunctions or symptoms of moisture ingress were observed, enabling demonstration of fundamental stimulation-response characteristics in the ovine spinal cord. These results highlight the translational potential of high-density EES paddles and provide a foundation for more advanced computation and processing to be integrated directly into neural interfaces.
|
SARS-CoV-2-Infektion und Auge (SARS-CoV-2 infection and the eye) | 2120bf30-a48f-455d-9d01-a6cf058d6686 | 7344348 | Ophthalmology[mh] | The SARS-CoV-2 pandemic has led to profound adjustments of everyday life worldwide. Even the gradual easing of the drastic measures does not yet allow a return to normality, neither in private life nor in the professional environment. This also affects the healthcare system, and thus ophthalmology, now and in the future. The infection will remain associated with the name of Li Wenliang, our ophthalmological colleague from Wuhan, because on purely clinical grounds he recognized early on the dangers of COVID-19, the pneumonia caused by the new coronavirus variant SARS-CoV-2, and warned his medical colleagues about it. Li Wenliang died of the consequences of COVID-19 at the age of only 33. With the key topic "SARS-CoV-2 infection and the eye", the authors aim to provide initial orientation and thereby contribute to a better understanding of the ophthalmological questions associated with SARS-CoV-2 infection. To ensure the care of eye patients with acute conditions requiring surgery that cannot be postponed under pandemic conditions, special adaptations of the operating concept have been demanded, particularly of those eye hospitals whose medical centers participated in the supraregional intensive care of patients infected with SARS-CoV-2. As an example, Mr. Pankaj Singh reports how the Frankfurt University Eye Hospital managed to meet the hygiene and distancing requirements in inpatient and outpatient conservative and surgical care. From Ms. Luisa Schwarz we learn which particular characteristics patients with SARS-CoV-2 infection requiring intensive care exhibited and how they can best be managed. Subsequently, Mr. Marius Ueffing explains the essentials of SARS-CoV-2 virus replication and immunology. Mr. Tarek Bayyoud then reports on first results of investigations of the human cornea of patients who died after infection with SARS-CoV-2. To date, RNA could not be detected in any of the eyes examined. Admittedly, this finding does not agree with reports from the European Eyebank Association, according to which the SARS-CoV-2 virus was found in 4% of donors at 2 of the cornea banks surveyed. However, this survey does not specify exactly how the diagnostics were performed (PCR of the cornea, PCR of the medium …) or at what point in the course of the disease the tissue was retrieved, e.g., in asymptomatic carriers. These findings suggest that, in rare cases, the cornea can become infected with SARS-CoV-2. This is consistent with the fact that the ACE2 and TMPRSS2 receptor equipment required for infection is indeed also present in the cornea, as Mr. Sven Schnichels will show in his contribution. It therefore appears essential to establish now the procedure for reliably excluding SARS-CoV-2 infection of the cornea. To this end, Mr. Sebastian Thaler will present the modalities for examining culture media as one option. In contrast to the cornea, ACE2 and TMPRSS2 are detectable in the conjunctiva only in very small amounts, which makes conjunctival infection by SARS-CoV-2 via these mediators unlikely. Nevertheless, conjunctivitis has been described in 1% of infected individuals.
Mr. Clemens Lange therefore addresses the questions: Can SARS-CoV-2 infect the conjunctiva and replicate there? Can healthy individuals be infected via the tear film? And: Is the tear film of infected patients infectious? The protection of staff in ophthalmology is, however, also essential. According to a recent survey of 2,036 residents in New York between 3 and 12 April 2020, 4.4% had a positive PCR result. The proportion was highest among the 282 anesthesiologists surveyed, at 7.4%. However, among the 177 ophthalmology residents, the proportion of those affected, at 5.1%, was also clearly above average, ahead of the otorhinolaryngologists with 5% of 40. Interestingly, diagnostic radiologists, at 1%, and pathologists, at 3.7%, were considerably less affected. Since in ophthalmology we mainly treat older patients, who are therefore affected by comorbidities, the care of this population group places high demands on the organizational processes of the institutions and on the staff. Mr. Focke Ziemssen will report on the SARS-CoV-2 immune status in a sample of healthcare professionals and the effects on hospital operations, Ms. Katrin Wacker will outline the specific ophthalmological protective measures in the COVID-19 pandemic, and Mr. Alexander Rohkohl will, among other things, also address the special hygiene measures in his contribution. The histological findings from 3 post-mortem eyes also contribute to the understanding of the infection process. Ms. Karin Löffler precisely describes the pathologies present in the globes obtained in Basel. The mild conjunctival inflammatory cell infiltrations she found do not differ from the usual post-mortem findings and therefore do not allow any conclusions about a specific ocular involvement in SARS-CoV-2 infection. Naturally, chains of infection can be avoided most effectively if patients only have to visit the ophthalmologist when this is indispensable for their treatment. The strategies developed by German hospitals in this regard are presented by Mr. Lars-Olaf Hattenbach and Ms. Nicole Eter. In some cases, patients who are already known can nowadays also be evaluated by telemedical consultation using a video consultation, thereby avoiding in-person visits to the doctor. The work of Mr. Rokas Gerbutavicus serves to illustrate satisfaction with video consultation and its limitations. The coronavirus pandemic will most likely still accompany us in the coming year. A safe and at the same time effective treatment is not yet in sight, and when a neutralizing vaccine will be available in sufficient quantities is likewise not yet foreseeable. In this respect, the existing measures for preventing infection through consistent implementation of hygiene requirements remain a must in ophthalmological care. Karl Ulrich Bartz-Schmidt
|
Development of LAMP assay for early detection of | 490777f4-2a3c-414b-bfcc-b60ac305adcd | 11869897 | Microbiology[mh] | Yersiniosis is a highly infectious disease causing significant economic losses in fish aquaculture worldwide, especially in salmonids . Yersinia ruckeri is the etiological agent of enteric red mouth disease (ERM) or Yersiniosis . Y. ruckeri is an anaerobic, gram-negative, rod-shaped bacteria that mainly enters the host body through the pavement cells of the gill lamellae, then into the intestine and bloodstream . Y. ruckeri is a facultative intracellular pathogen, beginning the infective stage in the extracellular space then inhabiting the macrophages during the intracellular phase. The pathogen can also lay dormant in some fish without causing clinical symptoms; however, the asymptomatic hosts transmit disease through defecation to more susceptible fish . The common symptoms of the disease include subcutaneous haemorrhage around the mouth, throat, tongue, gills, fins and gums in addition to an enlarged spleen and haemorrhages on the liver’s surface, pancreas, swim bladder or lateral muscles . The severity of infections in fish can vary, as some fish naturally recover from infections; however mass mortality has been observed during outbreaks . While there is no precise assessment of the economic toll caused solely by Y. ruckeri infections, it is one of the significant pathogens in aquaculture, contributing to a combined annual loss of over USD 6 billion . Y. ruckeri is classified into two biotypes (motile and non-motile) and four major serotypes, which have different surface antigens . Serotype O1b biotype I has predominantly been responsible for most outbreaks in the past, which led to the early development of a very effective vaccine; however, recent outbreaks in vaccinated fish caused by biotype II, has raised concerns about the ability to control Y. ruckeri infections . Without the availability of effective vaccines to control infection in all countries, there is an increased reliance on other control measures such as antibiotics and preventative management practices . Ideally, early pathogen detection will guide on-farm management decisions, enabling a swift response to increasing infection levels and supporting effective pathogen control. Traditional methods for the detection of Y. ruckeri are bacterial cultures , biochemical tests and serological tests . Although those methods are adequate for detection, they are laborious, can require particular growth conditions, and may not be applicable to all isolates, which can be challenging to differentiate between closely related species . Several molecular techniques, such as restriction fragmentation-length polymorphism (RFLP) , polymerase chain reaction (PCR) and real-time PCR can successfully detect low levels of the bacteria; however, they require expensive equipment, trained personnel, and are not suitable for point-of-care pathogen detection . Loop-mediated isothermal amplification (LAMP) is a method for amplifying DNA at a constant temperature. It uses four to six primers to target six to eight specific regions on a DNA template. The amplification starts with a strand-displacing DNA polymerase that initiates the synthesis of new DNA. Two of the primers form loop structures, which facilitate further rounds of amplification. One of the main benefits of LAMP is that it does not require a specialized thermocycler; instead, the reaction can take place at a stable temperature using either a heat block or a water bath. 
This makes LAMP a fast, straightforward, and economical choice for field applications, while still achieving the high sensitivity and specificity associated with traditional molecular techniques . LAMP assays have been developed for use in the aquaculture industry for the detection of viral , bacterial and parasitic pathogens . Furthermore, a LAMP assay has been developed for the detection of Y. ruckeri , termed enteric red mouth LAMP (ERM-LAMP); however, this test requires extensive sample preparation and DNA extraction within a specialised laboratory . The existing ERM-LAMP assay utilizes five primers targeting the yruI/yruR gene of Y. ruckeri , and amplification can be detected after one hour of incubation at 63 °C using visual inspection, agarose gel electrophoresis or real-time monitoring of turbidity. Its high sensitivity allows detection of as little as 10 pg of Y. ruckeri genomic DNA. However, the ERM-LAMP method relies on the collection of fish to carry out the initial DNA extraction using tissue samples . While this is the standard testing method, it is not ideal to sacrifice several expensive, large fish for routine surveillance, nor to require specialists for aseptic dissection. Therefore, using environmental water for sampling offers a safer and simpler routine point-of-care surveillance method . Here, we report the development of a Y. ruckeri -specific LAMP ( Yr -LAMP) assay and an accompanying DNA extraction method suitable for field use from water, allowing rapid, reliable, and robust detection of Y. ruckeri within 1 h.
Yersinia ruckeri LAMP primer design Alignment of the glutamine synthetase ( glnA ) gene sequences was performed using Clustal Omega software to find regions conserved across all Y. ruckeri strains that varied from other non-target species. Four primer sets, designated as Yr#1, Yr#2, Yr#3, and Yr#4, were designed to target the glnA gene. The Yr#1 and Yr#2 LAMP primer sets were generated using Primer Explorer V5 according to the DNA sequence of the glnA gene of Y. ruckeri . Conversely, Yr#3 and Yr#4 were designed manually following the LAMP protocol developed by . Each of the four primer sets included inner (FIP & BIP), outer (F3 & B3) and loop (LF) primers, except for set Yr#2, which has two loop primers, LF & LB. The suggested primers were synthesized by Integrated DNA Technologies (IDT). Yersinia spp. synthetic DNA preparation for assay optimization Synthetic positive and negative controls were designed for assay optimisation and validation, covering 586, 599 and 593 base pairs (bp) of the glutamine synthetase ( glnA ) gene from Y. ruckeri ( AY333067.1 ), Y. rohdei ( AY333059.1 ) and Y. frederiksenii ( AY333030.1 ), respectively, and were synthesized by IDT. The synthetic DNA fragments were ligated into the pCRBlunt II-TOPO vector using the Zero Blunt™ TOPO™ PCR plasmid kit (Invitrogen, Waltham, MA, USA) according to the manufacturer's instructions and transformed into E. coli DH5α cells . Successful insertion into the Y. ruckeri ( pYr ), Y. rohdei ( pYro ) and Y. frederiksenii ( pYf ) plasmids was confirmed by colony PCR using M13 forward and reverse primers (Invitrogen, Waltham, MA, USA) . Transformed E. coli cells were grown on Luria-Bertani (LB) agar (1.5% (w/v) agar, 1% (w/v) tryptone, 1% (w/v) NaCl, 0.5% (w/v) yeast extract, pH 7.5) with the addition of 50 µg/ml of kanamycin to select for transformed cells. Plasmid isolations were performed using the FastGene ® Plasmid Mini Kit (NIPPON Genetics Co. Ltd, Tokyo, Japan), eluted in 50 µl of elution buffer and stored at −20 °C. Total plasmid concentration was determined by Qubit™ 1x dsDNA BR Assay Kit (Invitrogen, Waltham, MA, USA) using the Qubit 4 Fluorometer (Thermo Fisher Scientific, Waltham, MA, USA). Yersinia ruckeri LAMP– Yr- LAMP The four proposed Yr- LAMP primer sets were initially tested against 0.5 ng/µl of pYr using default primer concentrations of 0.8 µM for FIP & BIP, 0.2 µM for F3 & B3 and 0.4 µM for LF (and LB for Yr#2). All LAMP reaction volumes were 25 µl, consisting of 15 µl GspSSD2.0 Isothermal Mastermix (ISO-004; OptiGene, Horsham, England), 5 µl of primer mixture and 5 µl of template (plasmid or genomic DNA extract). The template was replaced by TE buffer (10 mM Tris-HCl and 0.1 mM ethylenediamine tetraacetic acid (EDTA), pH 8.0) in no template control (NTC) samples. Reactions were performed using the Genie ® II and Genie ® III machines (OptiGene Limited, Horsham, England), initiated with a pre-heating step of 40 °C for 1 min, followed by 30 min of isothermal amplification at 65 °C and then an annealing step in which the temperature dropped from 94 °C to 84 °C at a rate of 0.5 °C/s. After obtaining the optimal primer set, a series of assays was conducted to determine the most effective primer concentration, varying the amounts of FIP, BIP, and LF. Reactions were performed and analysed using the Genie ® II and Genie ® III machines, as previously described, with 30 min of amplification. The final primer concentrations were set at 0.2 µM for F3 and B3, 1.6 µM for FIP and BIP and 0.8 µM for LF.
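As a worked example of assembling the reaction from the final concentrations given above, the sketch below computes the per-reaction volumes for the 5 µl primer mixture; the 100 µM primer stock concentration is an assumption (a common resuspension) and is not stated in the text.

```python
# Per-reaction volumes for the 5 ul primer mixture, derived from the optimised
# final concentrations above. The 100 uM primer stock is an assumed value.
REACTION_UL = 25.0
PRIMER_MIX_UL = 5.0
STOCK_UM = 100.0
final_um = {"FIP": 1.6, "BIP": 1.6, "F3": 0.2, "B3": 0.2, "LF": 0.8}

stock_ul = {name: conc * REACTION_UL / STOCK_UM for name, conc in final_um.items()}
water_ul = PRIMER_MIX_UL - sum(stock_ul.values())

for name, vol in stock_ul.items():
    print(f"{name}: {vol:.2f} ul of {STOCK_UM:.0f} uM stock")
print(f"water: {water_ul:.2f} ul (primer mix total = {PRIMER_MIX_UL:.0f} ul per 25 ul reaction)")
# FIP/BIP 0.40 ul each, F3/B3 0.05 ul each, LF 0.20 ul, water 3.90 ul
```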
Determination of sensitivity for Yr- LAMP The assay’s sensitivity was assessed using serial dilutions of pYr . The template was sequentially 10-fold diluted in TE buffer from 0.5 ng/µl until 0.5 × 10 −9 ng/µl and tested with optimised Yr#1 primer concentrations . The reactions were repeated 10 times and analysed using the Genie ® II machine, as previously described, with 30 min of amplification to assess the limit of detection (LOD). PCR for Y. ruckeri Conventional PCR (cPCR) was employed to amplify a portion of the glnA target region associated with the Yr -LAMP assay, facilitating a sensitivity comparison. The amplification utilized specific primers for Y. ruckeri ( glnA ), designed by . The PCR reaction mixture comprised a total volume of 25 µl, containing a final concentration of 1X Promega GoTaq ® Green Master Mix, 0.4 µM of each primer, and 5 µl of the template. The products of the PCR were analyzed through electrophoresis on a 1.5% (w/v) agarose gel, stained with 0.2 μg/mL ethidium bromide, and run at 100 v for 35 min to enable clear visualization of the amplification results. Yr- LAMP specificity testing Initial specificity testing was performed using 0.25 ng/µl of target plasmid ( pYr ) and 100-fold higher concentration (25 ng/µl) of plasmids containing gene homologous from closely related species, pYro or pYf . Reactions were performed and analysed using the Genie ® II and Genie ® III machines, as previously described, with 30 min of amplification. To further validate the test’s specificity, the Yr -LAMP assay was evaluated against synthetic DNA representing 999 bp of glnA from salmon pathogens, Aeromonas salmonicida subsp. Salmonicida , Flavobacterium psychrophilum , and Renibacterium salmoninarum . This evaluation also included a range of other bacterial species. Total genomic DNA was extracted using Bioneer AccuPrep Genomic DNA extraction kit from a bacterial specificity panel consisting of gram-positive bacteria including, Enterococcus faecalis , Staphylococcus aureus , Streptococcus agalactiae , S. pyogenes , S. salivarius and S. sanguinis and g ram-negative bacteria including, Escherichia coli , Pseudomonas aeruginosa , P. fluorescens and Vibrio natriegens species following manufacturer instructions. Total DNA concentrations were determined by Qubit™ 1× dsDNA BR Assay Kit (Invitrogen, Waltham, MA, USA), using Qubit 4 Fluorometer (Thermo Fisher Scientific, Waltham, MA, USA) and were stored at −20 °C until needed. Furthermore, both fresh and seawater samples were collected and screened. To assess the water collection contaminants, seawater samples and fresh lake water (La Trobe Lake, Victoria, Australia) were filtered using the filtration method outlined in . The released genomic DNA was used directly as LAMP template or stored at 4 °C until further use. The presence of bacteria in those concentrated water samples was confirmed using a universal bacterial PCR assay targeting the 16S gene of both gram-positive and gram-negative bacteria . The amplification was performed using a primer mixture of seven primers, Golden mixture 7 . The PCR reaction mixture totalled 25 µl, consisting of a final concentration of 1X Promega GoTaq ® Green Master Mix, 0.1 µM of each of the seven primers and 5 µl of concentrated seawater or lake water as a template. The PCR product was observed using 1.5% (w/v) agarose gel with 0.2 μg/mL ethidium bromide, with electrophoresis performed at 90 v for 70 min . 
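For reference, the sketch below converts the plasmid mass concentrations used in this dilution series into approximate copy numbers; the ~3.5 kb pCR-Blunt II-TOPO backbone size and the standard 650 g/mol average mass per base pair are assumptions not stated in the text.

```python
# Approximate conversion of plasmid mass concentration to copy number.
AVOGADRO = 6.022e23
PLASMID_BP = 3519 + 586            # assumed pCR-Blunt II-TOPO backbone + 586 bp glnA insert
G_PER_MOL_PER_BP = 650.0           # standard average mass of one base pair

def copies_per_ul(ng_per_ul, length_bp=PLASMID_BP):
    moles_per_ul = (ng_per_ul * 1e-9) / (length_bp * G_PER_MOL_PER_BP)
    return moles_per_ul * AVOGADRO

for ng in (0.5, 0.5e-4, 0.5e-7):
    print(f"{ng:.1e} ng/ul -> {copies_per_ul(ng):.2e} copies/ul")
# 0.5 ng/ul -> ~1.13e8 copies/ul, matching the starting point quoted in the Results;
# 0.5e-7 ng/ul -> ~11 copies/ul, the lowest dilution giving reliable amplification
```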
LAMP was completed following the isolation of genomic DNA, plasmids containing gene homologous of closely related species, synthetic DNA of salmonid pathogens and the concentration of water samples. Using the optimised concentration of the selected Yr -LAMP primer set at 65 °C, the LAMP reaction was run for 30 min to detect the amplification. Reactions were performed in triplicates and tested by the Genie ® II and Genie ® III machines; as previously described, the annealing temperature of the assay was used to confirm the correct product. Isolation of DNA from water samples for in-field detection A single colony of E. coli transformed with pYr was grown in 5 ml of LB broth at 37 °C with shaking at 220 rpm until reaching OD 600 = 1.0, measured by the Halo DNAmaster (Dynamica). Cell numbers were calculated using the standardized equation of OD 600 of 1.0 = 8 × 10 8 cells/ml for E. coli . Environmental seawater was spiked with 10-fold serial dilutions of E. coli cells containing pYr . For each sample, 0.5 ml of cells was added to 45.5 ml of seawater (collected from St Kilda beach, Victoria, Australia) samples using the filtration method developed by . Briefly, the water sample was filtered using a Target2™ GMF (Glass MicroFiber) Syringe Filters with a 1.2 µm pores size. Then, samples were refiltered through a polyethersulphone (PES) Whatman™ Uniflo™ Syringe Filters with a 0.45 µm pores size. The filter unit was washed with 5 ml of sterile water, and subsequently back flushed with 200 µl of sterile water into a 1 cc syringe. The bacterial cells in the recovered water from the filter were lysed using 0.3 M KOH (1:1 ratio) as a lysis buffer, and 5 μl of the mix was directly used in the Yr -LAMP reaction . Data analysis The co-efficient of variation (CV%) was calculated to indicate repeatability using the equation: CV% = (Standard deviation/mean) × 100. The sensitivity experiment was performed 10 times to assess the inter-assay variation (CV%). Primer optimisation, specificity and spiking experiments were repeated three times to confirm the results.
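The spiking arithmetic used for the environmental samples can be reproduced as follows; the conversion of OD600 to cell number follows the equation given above, while the nominal 50 ml sample volume is taken from the Results and used here as a simplifying assumption.

```python
# Spiking arithmetic for the environmental water samples.
CELLS_PER_ML_AT_OD600_1 = 8e8      # conversion used in the Methods for E. coli
SPIKE_VOLUME_ML = 0.5
SAMPLE_VOLUME_ML = 50.0            # nominal total sample volume (simplifying assumption)

def spiked_levels(dilution_exponent):
    """Return (total cells spiked, cells per microlitre of water sample)."""
    cells_per_ml = CELLS_PER_ML_AT_OD600_1 * 10.0 ** (-dilution_exponent)
    total_cells = cells_per_ml * SPIKE_VOLUME_ML
    return total_cells, total_cells / (SAMPLE_VOLUME_ML * 1000.0)

for d in (3, 4, 5, 6):
    total, per_ul = spiked_levels(d)
    print(f"10^-{d} dilution: {total:,.0f} cells spiked -> {per_ul:.3g} cells/ul")
# The 10^-5 dilution corresponds to ~4,000 cells (~0.08 cells/ul), the lowest level
# at which Yr-LAMP detection is reported; the 10^-3 dilution gives ~8 cells/ul.
```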
Primer screening The target gene in this study, glutamine synthetase ( glnA ) from Y. ruckeri , was chosen for its divergent DNA sequence following alignment comparisons with similar genes within the national center of biotechnology information (NCBI) database. When manually identifying the optimal region of the glnA sequence for designing LAMP primers, an alignment of the glnA gene from Y. ruckeri revealed that the region between 70 and 585 bp showed over 99% similarity among Y. ruckeri strains and less than 90% similarity with various Yersinia species. Using the sequence mentioned above as a template for designing primers resulted in four primer sets with Yr#1 and Yr#2 software generated and Yr#3 and Yr#4 manually designed . Manually designed LAMP primers allowed for greater flexibility in targeting specific areas of glnA that had lower percentage similarity. The Yr#1, Yr#2, Yr#3 and Yr#4 were designed to amplify regions with approximate similarity 88%, 89%, 90% and 89%, respectively, to non-target sequences. Initial testing of the four different primer sets for Y. ruckeri - glnA resulted in varying time to positive amplification (Tp), ranging from 05:37 ± (7 s) to 17:47 ± (92 s) min . Primer set Yr#1, which consists of five primers designed to amplify 209 bp region from 219 to 427 bp of the Y. ruckeri glnA gene , gave the fastest Tp (05:37 min ± 7 s) and was chosen for all future experiments. Optimisation of Yr -LAMP primers Manipulation of the LAMP reaction conditions were required to improve the Tp and the assay sensitivity. Initially, performance was tested using different concentrations of inner, outer and loop primers of Yr#1 . Seven different concentrations of primers within Yr#1 set were tested against 0.5 ng/µl of pYr and, four different combinations of primer concentrations resulted in average Tp less than 5:42 min . Those four primer combinations were assessed using 0.25 ng/µl of the pYr and 100-fold greater concentration of closely related Yersinia species, Y. rohedi and Y. frederiksenii . The primer concentration of 1.6, 0.2, 0.8 µM for FIP & BIP, F3 & B3 and LF respectively, showed the highest sensitivity and specificity, with a Tp of 05:27 min ± (12 s) for pYr , no detection of pYro and a Tp of 24:00 min ± (0 s) for pYf . The amplification of the higher concentration of Y. frederiksenii plasmid was observed at 24 min, exceeding the 20-min cutoff time specified by the LAMP Mastermix manufacturer. Therefore, this primer set concentration was deemed specific to Y. ruckeri . Ultimately, 1.6, 0.2, 0.8 µM of FIP & BIP, F3 & B3 and LF primers, respectively, of Yr#1 set were chosen to be the optimal Yr -LAMP concentrations to discriminate between Y. ruckeri and closely related Yersinia species. Specificity assessment of Yr -LAMP Further specificity was performed using an optimized primer set against a range of gram-positive and gram-negative bacteria . Presence of bacteria in fresh and seawater samples was confirmed by PCR using universal bacterial primers targeting 16S gene. Agarose gel then confirmed that bacteria were present in the filtered water sample, as a multiple band profile was observed . Seawater sample #8 and lake water sample #4 were selected to resemble the environmental water in further experiments. A single amplification peak specific to pYr amplification was detected approximately at 6 min with Tm 88.2 °C. There was no amplification for any of the non-target samples during the 30 min LAMP amplification time , indicating that the Yr -LAMP assay was specific for Y. 
ruckeri and did not display cross-reactivity with any of the other panel bacteria or other microorganisms which naturally inhabit seawater or lake water. Analytical performance of Yr -LAMP assay LAMP sensitivity was evaluated using a 10-fold serial dilution starting at approximately 1.13 × 10 8 copies/µl of the Y. ruckeri glnA gene. The optimized Yr -LAMP assay conditions could successfully detect as low as 11.3 copies/µl of pYr in less than 20 min . All amplicons showed a consistent melting temperature of approximately 88 °C, even at a concentration of 0.5 × 10 −8 ng/µl. However, this low concentration produced unreliable Tp results, with inconsistent amplification across assays and a CV% greater than 15%. Excluding the 0.5 × 10 −8 ng/µl dilution, the maximum inter-assay CV% for the sensitivity tests was 9.9%. Based on the sensitivity test results, a sample is considered positive if amplification occurs within 20 min with an inter-assay CV% of ≤15%. Consequently, any subsequent fluorescent peaks are deemed negative. The corresponding cPCR could only amplify the 109 bp product specific to the target sequence down to 0.5 × 10 −4 ng/µl (equivalent to 1.13 × 10 4 copies/µl), three orders of magnitude less sensitive than the Yr -LAMP assay . Environmental sampling performance of Yr -LAMP assay Seawater known to be free from Y. ruckeri was collected from St. Kilda beach and used to assess a filtration and simple extraction method suitable for field use and LAMP detection. Each 50 ml water sample, either spiked with decreasing quantities of E. coli cells containing the glnA gene of Y. ruckeri or left unspiked as a negative control, was filtered and extracted with an equal volume of 0.3 M KOH. Yr -LAMP confirmed detection to the equivalent of 0.08 cells/µl within 14 min with a melting temperature of approximately 88 °C . However, the corresponding cPCR detected only 8.0 cells/µl .
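The positivity criteria described above can be summarized as a simple decision rule, sketched below; the ±1.5 °C window around the ~88 °C product anneal temperature is an illustrative assumption rather than a threshold specified in this study.

```python
# Simple decision rule reflecting the positivity criteria described above.
EXPECTED_TM_C = 88.0       # anneal temperature of the specific Yr-LAMP product
TM_TOLERANCE_C = 1.5       # assumed tolerance window (not specified in the study)
TP_CUTOFF_MIN = 20.0       # amplification later than this is treated as negative

def call_yr_lamp(tp_min, tm_c):
    """Classify one Genie run from its time-to-positive (min) and anneal temperature (C)."""
    if tp_min is None or tm_c is None:
        return "negative (no amplification)"
    if tp_min <= TP_CUTOFF_MIN and abs(tm_c - EXPECTED_TM_C) <= TM_TOLERANCE_C:
        return "positive"
    return "negative (late or off-target amplification)"

print(call_yr_lamp(5.5, 88.2))   # positive
print(call_yr_lamp(24.0, 87.9))  # negative: amplified after the 20 min cutoff
print(call_yr_lamp(None, None))  # negative: no amplification within the run
```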
In this study, we utilized competent E. coli cells integrated with synthetic DNA encoding the Y. ruckeri glutamine synthetase gene for DNA extraction and sensitivity testing. The use of E. coli was necessitated by the unavailability of Y. ruckeri cultures or clinically infected samples. Our DNA extraction method, which employed KOH, proved effective even on gram-positive bacteria, indicating its robustness and potential applicability across different bacterial types . Initial assessments demonstrated that while E. coli served as a useful surrogate due to its gram-negative nature, it is imperative to validate these findings with Y. ruckeri bacterial cells to ensure accurate sensitivity values. To advance this validation, efforts should be made to acquire Y. ruckeri samples and replicate the DNA extraction and sensitivity tests on Y. ruckeri cells and compare the results with those derived from E. coli . While the Yersinia genus encompasses a range of species, including Y. enterocolitica and Y. frederiksenii , some of which have been found in fish, not all are associated with fish diseases . Therefore, it is essential to identify a distinct target gene that ensures high specificity for detecting Y. ruckeri amidst other related and unrelated bacteria present in the fish environment . The 16S rRNA gene, which is approximately 1,500 base pairs in size, is one of the most conserved genes in bacteria due to its critical role in cell function. It is also the most extensively represented gene in the GenBank database, with over 90,000 sequences available . The aforementioned reasons make 16S rRNA the ideal candidate for designing specific PCR primers however, challenges may occur when determining eight distinct regions for designing LAMP primers . The glnA gene is a promising marker for identifying Y. ruckeri because it is conserved among various strains. A real-time PCR assay targeting the glnA gene of Y. ruckeri demonstrated 100% specificity in detecting ERM infection . The conservation of this gene sequence within Y. ruckeri makes the assay a versatile diagnostic tool for identifying new bacterial variants globally . The previously published ERM-LAMP assay utilized five primers targeting the yruI/yruR gene of Y. ruckeri in tissue samples. While the assay demonstrated high specificity for Y. ruckeri , it required one hour for amplification . The newly developed Yr -LAMP assay comprises five LAMP primers targeting the glnA gene of Y. ruckeri , which was conserved in all tested Y. ruckeri strains and varied from other species within the same genus . The assay was optimised to use primer concentrations of 1.6, 0.2, 0.8 µM of FIP & BIP, F3 & B3 and LF, respectively, of the Yr#1 primer set to achieve high specificity and sensitivity. All the amplicons had a similar melting temperature of approximately 88 °C, even at a concentration of 0.5 × 10 −8 ng/µl. However, this concentration resulted in stochastic amplification and was therefore considered inconsistent and excluded. While the existing ERM-LAMP and real time PCR has analytical sensitivity of 1 pg and 5 fg respectively, our newly developed Yr -LAMP assay can detect 0.5 × 10 −7 ng/µl of plasmid DNA which is equivalent to 11.3 copies/µl in under 20 min. This is a shorter detection time compared to both of those assays . Accurate sensitivity comparison among the three assays is not possible since our assay was tested on plasmids with the target gene, while the others used genomic DNA. 
Nonetheless, our Yr -LAMP assay is highly sensitive, being approximately 1,000-fold (three orders of magnitude) more sensitive than the corresponding cPCR and with a detection limit comparable to other published LAMP assays . Previous studies have tracked the route of Y. ruckeri in the host fish: the bacteria enter the fish via the gills and shortly afterwards reach the intestine, although they were not observed in the kidney until the third day post infection and were observed in the brain and other internal organs a week later . Those findings support the concept that early pathogen detection is a key factor for containing infection and mitigating the excessive use of antimicrobial treatment . Sampling and extraction can be considered the most critical steps in diagnostics, as they determine whether or not an assay is field-deployable. The current assays for detecting Y. ruckeri are either invasive, using fish tissues, or non-invasive, using faeces or blood samples . Although the published Y. ruckeri detection methods are rapid (approximately one hour) for LAMP and real-time PCR, the extraction step itself is lengthy. Extracting genomic DNA from tissue, blood, or faecal samples with commercial extraction kits requires several hours, and purifying the extracts from PCR inhibitors takes even longer. Therefore, to address these challenges for our Yr -LAMP assay application, we developed field-tolerant sampling and extraction methods from environmental water . To test the field performance of the Yr -LAMP assay, environmental water spiked with serial dilutions of E. coli bearing the target gene was filtered and lysed using KOH. The Yr -LAMP assay could successfully detect as little as 0.08 cells/µl (3.9 × 10 −10 ng/µl of pYr ) of the initially collected water in less than 15 min. Relating these results to the sensitivity test, we can suggest that approximately 1.13 × 10 2 copies/µl of pYr were recovered and extracted from the initial 4,000 cells spiked into the 50 ml of water. In contrast, cPCR could only detect down to 8.0 cells/µl of the initial 50 ml water sample, 100-fold (two orders of magnitude) less sensitive than the Yr- LAMP assay. It was confirmed that Yr -LAMP only amplified the positive control and no other bacteria, suggesting that Yr -LAMP is specific. Because the assay will be used on environmental samples, where there is potential for cross-reactivity with pre-existing microorganisms in the water habitat, water samples from the beach and the lake were collected, filtered, and concentrated using the filtration method. Although a diverse range of microorganisms exists in both freshwater and seawater, the microbiome extracted from these samples did not result in amplification by our developed LAMP assay. This outcome demonstrates the assay's high specificity . Although using KOH for direct extraction of DNA in-field was successful and sensitive in previous work , it was observed to have a slight inhibitory effect on LAMP, reflected in a minor decline in Tm compared to the positive control. Those observations are consistent with other findings regarding the effect of inhibitors on the melting temperature of LAMP . However, despite this effect on Tm, the inhibition did not impact the assay's sensitivity and was therefore not considered problematic. Studies have demonstrated that Y. ruckeri can survive for up to 90 days or more in both fresh and marine waters . Therefore, detecting Y. ruckeri in water samples can inform real-time management strategies. For instance, the presence of Y. ruckeri in sea cages may indicate the need for treatment or the delay of introducing new fish into the sea cages.
The data reported for our Yr-LAMP assay were obtained using real-time LAMP instruments, the Genie® II and Genie® III real-time fluorometers (OptiGene, Horsham, England). The Genie is a lightweight, portable device that can run on batteries. Despite the widespread use of Genie instruments in numerous studies globally, they are constrained by a limited number of chambers, with the Genie® III offering eight wells and the Genie® II providing 16. This restricts the number of samples available for testing to six and 14, respectively, excluding positive and negative controls. Furthermore, the capacity is halved if duplicate tests are necessary, making these instruments unsuitable for high-throughput detection. For wide-range surveillance, other detection alternatives can be used, such as lateral flow dipsticks (LFD) or visual colour detection. For example, one study developed an impressive LAMP assay capable of detecting Singapore grouper iridovirus (SGIV) directly from fin samples that were boiled in 50 µl of 0.02 N NaOH. The results of that LAMP assay were indicated by a colour change from yellow to pink, the entire process took approximately an hour, and it was able to detect fewer than 6 copies/µl of the virus. The straightforward extraction process, combined with the rapid and sensitive nature of the assay, makes it an effective on-site method for efficiently managing SGIV infections. LFD has gained recent popularity due to its versatility in detecting various pathogens, including bacteria, viruses, and parasites. Its ease of use, exemplified by the accessibility of COVID-19 test strips even to non-specialists, underscores its practicality. Looking ahead, exploring the feasibility of adapting the Yr-LAMP assay into an LFD format presents an opportunity to circumvent the testing limitations inherent in Genie devices. By integrating swabbing or the straightforward filtration and extraction methods demonstrated in our study, we aim to develop a high-capacity, device-free, point-of-care detection method suitable for deployment by untrained personnel.
Yersinia ruckeri poses significant economic risks to the aquaculture industry, necessitating regular surveillance to pre-emptively manage infections and reduce reliance on antimicrobial agents, thereby preventing outbreaks. The Yr-LAMP primers exhibit high specificity, effectively discriminating Y. ruckeri from organisms naturally occurring in fish environments and from closely related Yersinia species. The method identifies the pathogen at extremely low levels, with an analytical sensitivity of 0.08 cells/µl in less than 20 min. Integrating these specific LAMP primers with filtration and KOH extraction enhances the system's speed, reliability, affordability, and simplicity, making it feasible for farm workers to detect Y. ruckeri infections early and control outbreaks effectively.
Supplemental Information 1 (10.7717/peerj.19015/supp-1): Conventional polymerase chain reaction (cPCR) primers and conditions used throughout the current research. The underlined regions are complementary to the 5′–3′ sequence.
Supplemental Information 2 (10.7717/peerj.19015/supp-2): Raw data.
Risk factors for postoperative acute kidney injury after cytoreductive surgery combined with hyperthermic intraperitoneal chemotherapy: a meta-analysis and systematic review | 661be154-8aa5-437c-82cb-9a948bd9f538 | 11796243 | Surgical Procedures, Operative[mh] | Peritoneal surface malignancies (PSM) are usually associated with poor prognosis and severe complications. PSM can be either primary tumors of the peritoneum or peritoneal metastases originating from secondary spread of tumors from other organs, including intra-abdominal organs (e.g., gastrointestinal and ovarian tumors) or extra-abdominal organs (e.g., lung, breast, and kidney tumors). The poor prognosis of PSM patients has always been a challenge for clinicians, despite the adoption of maximal tumor resection combined with preoperative as well as postoperative adjuvant intravenous chemotherapy. Cytoreductive surgery (CRS) combined with hyperthermic intraperitoneal chemotherapy (HIPEC) has been used as a first-line treatment option for patients with PSM and progressive tumors of related abdominal organs. CRS achieves maximal resection of macroscopically visible tumor, while HIPEC delivers a continuous infusion of chemotherapeutic agents into the peritoneal cavity, thereby inhibiting or killing microscopic residual cancer cells. In patients with peritoneal metastases (PM), this is the only treatment that has been proven in studies to significantly improve patients' 5-year survival. According to reported results, the 5-year survival rate of colorectal cancer-derived peritoneal metastases increased to 25–51% with CRS combined with HIPEC, while the 5-year survival rate of pseudomucinous tumors was as high as 60–80%. In addition, encouraging results have been achieved with prophylactic CRS + HIPEC in patients with progressive abdominal tumors at high risk of peritoneal metastases. PSM patients can be treated with CRS + HIPEC to improve survival, but the concern is that this combination is often accompanied by high morbidity and mortality, and not everyone can tolerate this aggressive treatment modality. Patients receiving combination therapy were more likely to experience grade 3 or higher adverse events than patients receiving CRS alone (34 of 131 patients vs. 20 of 130 patients, p = 0.035). Acute kidney injury (AKI) is one of the most common complications after CRS + HIPEC, with a reported incidence ranging from 1 to 48%, and it is especially common in patients with a history of cisplatin use. AKI has been reported as a potentially dangerous complication in several studies, and its occurrence is closely associated with considerable morbidity and mortality. Several studies have shown that certain risk factors are important predictors of AKI after CRS + HIPEC, including cisplatin use, decreased eGFR, age, gender, obesity, hypertension, and diabetes mellitus. However, no studies have systematically summarized and evaluated the association between these risk factors and the occurrence of AKI after CRS + HIPEC. CRS + HIPEC improves the prognosis of patients, but the occurrence of postoperative AKI is often accompanied by prolonged hospitalization and increased mortality.
Therefore, we will focus on the effects on AKI of preoperative risk factors such as age, gender, body mass index (BMI), peritoneal cancer index (PCI), estimated glomerular filtration rate (eGFR), hemoglobin (Hb), diabetes mellitus, and hypertension, and of intraoperative risk factors such as intraoperative hypotension, intraoperative fluid management, and chemotherapeutic drug selection. In addition, the potential impact of different primary tumor sites on AKI risk will be explored. Through these analyses, we aim to provide clinicians with more accurate risk assessment tools and to guide targeted preventive measures preoperatively and intraoperatively to reduce the incidence of AKI and improve patient prognosis. In patients undergoing CRS + HIPEC, AKI is a serious postoperative complication that is closely related to patient prognosis. Although studies have examined the risk factors for AKI, the results vary widely and lack systematic summarization. Therefore, the main objectives of this meta-analysis were (1) to determine the incidence of AKI after CRS + HIPEC; (2) to identify the major risk factors associated with the development of AKI after CRS + HIPEC; (3) to assess the effectiveness of reported prevention strategies and interventions in reducing the incidence and severity of AKI; and (4) to identify gaps in existing research and suggest directions for future studies to improve patient outcomes and optimize perioperative management.
This meta-analysis was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement as the primary guideline and was pre-registered in PROSPERO (CRD42024585269). The PRISMA checklist specific to this meta-analysis can be found in Appendix Table S1.
Search strategy
Two investigators (CDZ and WL) independently conducted a systematic search of the Cochrane Library, Embase, Web of Science, and PubMed databases with the aim of identifying studies published up to September 1, 2024, on risk factors for AKI after CRS combined with HIPEC. We used MeSH terms and keywords including CRS, HIPEC, and AKI, connecting synonyms with "OR" and combining different terms with "AND". We focused on English-language journal articles, excluded unpublished and non-English literature, and reviewed only titles and abstracts at this stage to simplify screening. The detailed search strategy is described in the Appendix document.
Inclusion and exclusion criteria
All included literature for this meta-analysis met the following criteria. The inclusion criteria were: (1) the design was a case–control, cohort, or cross-sectional study; (2) participants were adults (≥ 18 years of age) with primary or metastatic peritoneal tumors treated with CRS + HIPEC; (3) studies provided definitions of CRS and HIPEC and complete patient baseline data; (4) risk factors for AKI or the relationship between AKI and the prognosis of patients after CRS + HIPEC were reported, and detailed event counts or odds ratios for the AKI and non-AKI groups were provided; (5) AKI was diagnosed using the KDIGO or AKIN criteria; and (6) study quality corresponded to a Newcastle–Ottawa Scale (NOS) score ≥ 6. The exclusion criteria were: (1) reviews, conference reports, letters, case reports, and animal experiments; (2) studies of poor quality or lacking complete data or results; and (3) articles with inaccessible full text or missing content.
Study selection and data extraction
The study screening process began with de-duplication of all retrieved literature using Endnote 20.0 software. Two authors (CDZ and LLF) then performed an initial screening of the literature by reviewing titles and abstracts to exclude studies that did not meet the predetermined inclusion and exclusion criteria. After the initial screening, the full text was evaluated to identify eligible studies. Any disagreements between the two authors were resolved through discussion with a third researcher, who made the final decision. The study screening process is shown in Fig. . We extracted the following data from the included studies using a specially designed form: (1) basic information about the article (first author, year of publication, and type of study); (2) information about the participant population, including sample size, age, sex ratio, and region; (3) information on risk factors, divided into preoperative risk factors, intraoperative risk factors, and different primary tumor sites (the specific classification can be found in the Results section); and (4) study data and results, including event counts in the AKI and non-AKI groups for different risk factors, the magnitude of independent risk factors, and the incidence of potential risk factors in different studies.
Quality assessment and statistical analysis
Quality assessment was performed using the Newcastle–Ottawa Scale (NOS) for case–control and cohort studies and the Cochrane Manual for randomized clinical trials.
Two authors (CDZ and WL) independently assessed studies in three domains: selection, comparability, and outcome/exposure. Scoring was done using a checklist containing 8 items, where 1–3 stars indicated low quality, 4–6 stars moderate quality, and 7–9 stars high quality. All included studies were rated as high quality, with scores of ≥ 7 stars. See Appendix Table S2 for the detailed rating scale. We conducted this meta-analysis using Review Manager 5.4 and Stata 17.0 to assess the effect sizes of the risk factors, including the mean difference (MD) and the odds ratio (OR) with their 95% confidence intervals. Heterogeneity between studies was assessed using Cochran's Q test and the I² statistic. Where significant heterogeneity was detected (I² ≥ 50%), a random-effects model was used and sensitivity analyses were performed to determine the source of heterogeneity; conversely, if heterogeneity was not significant (I² < 50%), a fixed-effects model was used. Publication bias was assessed using funnel plots, Begg's test, and Egger's test to ensure the stability of the findings. All tests were two-tailed, and P < 0.05 was considered statistically significant.
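To make the pooling rules above concrete, the following Python sketch shows how per-study effect sizes and variances feed into an inverse-variance pooled estimate, Cochran's Q, I², and the fixed- versus random-effects switch described above. It is an illustration of the standard formulas with hypothetical 2×2 tables, not the code actually run in Review Manager 5.4 or Stata 17.0.

```python
import math

def log_or_and_var(a, b, c, d):
    """Log odds ratio and its variance from a 2x2 table
    (a, b = events/non-events in the AKI group; c, d = in the non-AKI group)."""
    return math.log((a * d) / (b * c)), 1/a + 1/b + 1/c + 1/d

def pool(effects, variances):
    """Inverse-variance pooling with the I^2-based model choice described above."""
    w = [1/v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))   # Cochran's Q
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    if i2 >= 50:                                  # random effects (DerSimonian-Laird)
        tau2 = max(0.0, (q - df) / (sum(w) - sum(wi**2 for wi in w) / sum(w)))
        w = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    se = math.sqrt(1 / sum(w))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

# Hypothetical per-study 2x2 tables (not data from the included papers)
tables = [(20, 45, 60, 300), (15, 30, 80, 350), (12, 40, 50, 280)]
effects, variances = zip(*(log_or_and_var(*t) for t in tables))
pooled_log_or, ci, i2 = pool(list(effects), list(variances))
print(math.exp(pooled_log_or), tuple(math.exp(x) for x in ci), i2)
```

The continuous risk factors reported below (age, BMI, PCI, eGFR, Hb) can be pooled with the same function by substituting the mean difference and its variance (sd₁²/n₁ + sd₂²/n₂) for the log OR and its variance.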
Literature search results and study characteristics
A total of 63 articles on AKI after CRS + HIPEC were retrieved through the developed search strategy (Fig. ). All retrieved articles were imported into Endnote 20.0 software, and 16 duplicate articles were excluded using the software's de-duplication function. By reading the titles and abstracts, 22 papers that did not meet the inclusion criteria were initially excluded, leaving 25 papers after the initial screening. Among the remaining papers, we performed full-text reading and excluded a total of 18 articles that did not meet the inclusion requirements, including 6 papers from which data could not be extracted, 3 papers with Newcastle–Ottawa Scale (NOS) scores lower than 6, 5 papers addressing problems unrelated to the topic, and 4 papers that did not record the target outcomes. In the end, a total of 7 papers were included in this systematic review and meta-analysis, all of which were retrospective studies and all of which described in detail the number of events or specific values of the different risk factors for postoperative AKI in the AKI group versus the non-AKI group (Table ). This systematic review and meta-analysis investigated a total of 1550 patients who underwent CRS + HIPEC across the seven studies. The included studies were published online from 2017 to 2024. Two of the included studies were from the United States, two from China, two from Germany, and one from Portugal. The largest proportion of all patients were from the United States (34.39%), followed by China (28.39%), Germany (25.48%), and Portugal (11.74%). A total of 28 risk factors, as well as 6 different primary tumor sites, were included in the analysis of their impact on postoperative AKI. The risk factors were divided into preoperative and intraoperative factors for independent analysis, and a summary of the detailed analysis results can be found in Table . The main primary sites of the tumors were the appendix (39%) and colorectal cancer (21%), followed by gastric cancer (13%), pseudomyxoma peritonei (9%), mesothelioma (6%), ovarian cancer (6%), and others (4%); the detailed distribution can be found in Appendix Figure S1. Detailed characteristics of the seven included studies can be found in Table .
Preoperative risk factors
In the seven studies included in this meta-analysis, we pooled and analyzed a total of 20 potential preoperative risk factors. These included the use of drugs such as ACEI or ARB, diuretics, and NSAIDs, as well as patient age, gender, BMI, and PCI. We also performed a pooled analysis of patients' preoperative Alb, preoperative Hb, preoperative eGFR, preoperative urea, and preoperative creatinine. In addition, the effects of various preoperative underlying diseases, such as chronic kidney disease, diabetes mellitus, heart disease, and hypertension, and of preoperative neoadjuvant therapy and preoperative chemotherapy on postoperative AKI were also included in our analysis. Hospitalization (days) and ICU duration (days) were also explored as potential risk factors for postoperative AKI.
All the data related to the above risk factors were collected using specially designed, scientifically based statistical forms, and preoperative risk factors with complete and comparable data were analyzed in a pooled manner. Eight risk factors, namely age, gender, BMI, PCI, eGFR, Hb, diabetes mellitus, and hypertension, were found to be statistically significant (p < 0.05); the results of the meta-analysis of all the above potential risk factors can be found in Table .
Age
A total of six studies recorded the age distribution of the AKI group versus the non-AKI group, with age reported as mean ± standard deviation. The heterogeneity test suggested no heterogeneity between studies (I² = 0%, P = 0.50); therefore, a fixed-effects model was chosen, and the pooled analysis showed a significant difference (MD = 2.04, 95% CI: 0.55, 3.52, p = 0.007) (Fig. A). We therefore conclude that advanced age is one of the preoperative risk factors for AKI after CRS + HIPEC.
Sex
A total of six studies described the gender distribution of patients, recorded as dichotomous variables. After testing for heterogeneity (I² = 21%, P = 0.27), the meta-analysis was performed using a fixed-effects model. The results suggested a statistically significant difference between the two groups (OR = 1.53, 95% CI: 1.17, 2.00, p = 0.002) (Fig. B). Therefore, we conclude that the risk of AKI after CRS + HIPEC is greater in male patients than in female patients, and gender is one of the preoperative risk factors for postoperative AKI.
BMI
A total of five studies reported detailed information on patients' BMI, recorded as mean ± standard deviation. The heterogeneity test suggested no heterogeneity between studies (I² = 0%, P = 0.65). Analysis using the fixed-effects model showed a statistically significant difference between the two groups (MD = 1.22, 95% CI: 0.42, 2.03, p = 0.003) (Fig. C). Therefore, we conclude that the risk of postoperative AKI after CRS + HIPEC increases as BMI increases.
Peritoneal cancer index
A total of three studies reported patients' preoperative peritoneal cancer index (PCI), with data recorded as mean ± standard deviation. The heterogeneity test (I² = 0%, P = 0.51; P > 0.1) suggested no heterogeneity between studies, so we used a fixed-effects model for the pooled analysis. The results showed a statistically significant difference between the AKI and non-AKI groups (MD = 3.79, 95% CI: 1.49, 3.79, p = 0.003) (Fig. D). Therefore, we conclude that a higher preoperative PCI is one of the significant risk factors for postoperative AKI.
Preoperative eGFR and Hb
A total of three studies reported details of preoperative eGFR, recorded as mean ± standard deviation. Pooled analysis of the data as continuous variables using a random-effects model (I² = 0%, p = 0.97) suggested no heterogeneity and a significant difference between the two groups (MD = -11.53, 95% CI: -17.89, -5.17, p = 0.0004) (Fig. E). Therefore, we conclude that a decrease in preoperative eGFR leads to an increase in the incidence of AKI after CRS + HIPEC and that preoperative eGFR is a significant preoperative risk factor. In addition, a total of two studies reported patients' preoperative Hb, with detailed data recorded as mean ± standard deviation.
Heterogeneity analysis showed no heterogeneity between the two studies (I² = 0%, P = 0.94), and the data were analyzed as continuous variables using a fixed-effects model. The results suggested a statistically significant difference between the two groups (MD = -5.87, 95% CI: -10.90, -0.83, p = 0.02) (Fig. F). In other words, a decrease in preoperative Hb increased the incidence of postoperative AKI.
Diabetes mellitus and hypertension
A total of five studies reported diabetes in 1310 patients, detailing the number of patients in the AKI group compared with the non-AKI group. The heterogeneity test suggested no heterogeneity among the studies (I² = 6%, P = 0.37), so we used a fixed-effects model for the meta-analysis. The results suggested that diabetes mellitus was one of the significant risk factors for AKI after CRS + HIPEC, and the results were statistically significant (OR = 1.78, 95% CI: 1.15, 2.75, p = 0.01) (Fig. G). In addition, a total of four studies reported information on patients' hypertension; the results of Annika et al. and Bai et al. indicated that preoperative hypertension was a risk factor for postoperative AKI (P < 0.05), whereas the findings of Lu et al. and Lukas et al. concluded that hypertension does not lead to an increased risk of postoperative AKI. We therefore performed a meta-analysis using a random-effects model (I² = 59%, p = 0.06), which showed a statistically significant difference between the two groups (OR = 2.43, 95% CI: 1.37, 4.31, p = 0.002) (Fig. H). Given the high heterogeneity of the pooled analysis, we used a study-by-study (leave-one-out) exclusion method to explore the source of heterogeneity; the I² values remained stable, and no significant source of heterogeneity was found.
Other preoperative risk factors
Other preoperative risk factors included the use of drugs such as ACEI or ARB, diuretics, and NSAIDs, and laboratory findings such as preoperative Alb, preoperative urea, and preoperative creatinine. We similarly performed pooled analyses of these risk factors, but the results suggested no significant difference between the two groups (P > 0.05). Detailed results of the heterogeneity analyses and pooled analyses can be found in Table .
Intraoperative risk factors
In the seven included articles, we defined a total of eight factors as intraoperative risk factors, including the use of the chemotherapeutic agents cisplatin and mitomycin in HIPEC, IO fluid, IO SBP < 100 mmHg, IO transfusion, IO vasopressors, operation time, urine output, and other parameters. We tested the heterogeneity of the above risk factors and selected the appropriate effect model for pooled analysis based on the results. The results suggested that the use of the chemotherapeutic drugs cisplatin and mitomycin and the duration of IO SBP < 100 mmHg (min) differed significantly between the two groups. Detailed results are shown in Table .
Operative time
A total of three studies reported specific operative times, recorded as mean ± standard deviation. After testing for heterogeneity (I² = 50%, P = 0.14), the pooled data were analyzed using a random-effects model. The results showed that the duration of surgery was not significantly different between the two groups (MD = 21.92, 95% CI: -20.25, 64.08, p = 0.31) (Fig. A).
In other words, it is not yet possible to consider the duration of surgery one of the risk factors for AKI after CRS + HIPEC.
Chemotherapy regimens
A total of 5 of the 7 included studies documented the number of events in the AKI group compared with the non-AKI group under different intraoperative chemotherapy regimens. Intraoperative chemotherapy regimens included drugs such as cisplatin, oxaliplatin, and mitomycin, of which the data for cisplatin and mitomycin were comparable, so we analyzed the data for these two drugs separately. A total of four studies reported a comparison of the number of events in patients receiving the chemotherapeutic agent cisplatin in HIPEC, recorded as dichotomous variables. Heterogeneity analysis revealed heterogeneity among the studies (I² = 79%, p = 0.01), and the results suggested that intraoperative use of cisplatin greatly increased the risk of postoperative AKI (OR = 2.84, 95% CI: 1.27, 6.35, p < 0.0001) (Fig. B). When we explored the source of heterogeneity using a leave-one-out approach, the I² values remained stable (66%–79%) and no source of heterogeneity was identified. Three studies reported the use of mitomycin in the two groups, and the heterogeneity analysis revealed substantial heterogeneity between the studies (I² = 70%, P = 0.04), so we again used a leave-one-out exclusion method to explore its source. When we excluded the data from the study by Annika et al., the heterogeneity disappeared (I² = 0%, P = 0.33), and we therefore determined that the data from Annika et al.'s study were the source of heterogeneity. After eliminating this source of heterogeneity, the data from the studies of Eduarda et al. and Juan et al. showed considerable homogeneity, and pooling the data from both showed that the use of mitomycin differed significantly between the two groups and was a protective factor against postoperative AKI, with a statistically significant result (OR = 0.41, 95% CI: 0.26, 0.63, p < 0.0001) (Fig. C).
IO SBP < 100 mmHg (min)
A total of two studies recorded the specific time (min) of intraoperative SBP < 100 mmHg, with data described as mean ± standard deviation. Pooled analysis of the data from Lu et al. and Lukas et al. using a random-effects model (I² = 0%, p = 0.50) revealed a statistically significant difference between the two groups (MD = 10.95, 95% CI: 3.15, 18.75, p = 0.006) (Fig. D). Therefore, we conclude that a prolonged duration of intraoperative SBP < 100 mmHg leads to an increased risk of postoperative AKI after CRS + HIPEC and is one of the intraoperative risk factors.
Other intraoperative risk factors
Other intraoperative risk factors included parameters such as IO fluid, IO transfusion, IO vasopressors, and urine output. Detailed results of the heterogeneity analyses and meta-analyses can be found in Table . Two studies explored the effect of IO fluid on postoperative AKI, and the results indicated that IO fluid was not one of the intraoperative risk factors (MD = 79.20, 95% CI: -135.33, 293.72, p = 0.47). Three studies provided detailed data on IO transfusion, analyzed with a random-effects model (I² = 60%, p = 0.08), which suggested that IO transfusion was not significantly different between the two groups (OR = 0.90, 95% CI: 0.47, 1.72, p = 0.74). Heterogeneity between studies disappeared after excluding the data from the study by Lukas et al. (I² = 0%, p = 0.79); however, the conclusion did not change (OR = 1.20, 95% CI: 0.80, 1.79, p = 0.38). Two studies reported the effect of IO vasopressors on postoperative AKI; the data were pooled using a random-effects model, and the results suggested that the use of intraoperative vasopressors did not increase the risk of AKI. In addition, a total of two studies recorded patients' intraoperative urine output, with complete data recorded as mean ± standard deviation. The meta-analysis suggested that intraoperative urine output was not one of the risk factors for postoperative AKI, and the result was not statistically significant (MD = 27.15, 95% CI: -127.21, 181.51, p = 0.73).
Primary tumor site
A total of five of the seven included studies reported the comparative number of events and the incidence of AKI after CRS + HIPEC for different primary tumor sites. In our dataset, the most common primary site was the appendix (39%), followed by colorectal (21%) and gastric (13%) sites, with the other sites and their percentages being pseudomyxoma peritonei (9%), mesothelioma (8%), ovarian (6%), and others (4%). For the different primary tumor sites, we separately and independently performed heterogeneity tests and selected appropriate models for the meta-analysis to investigate whether the primary tumor site was a potential risk factor for postoperative AKI. The detailed results of the heterogeneity analysis and meta-analysis are shown in Table . Among them, a total of five studies reported the number of events for the appendiceal site in the AKI group and the non-AKI group; the data were pooled as dichotomous variables with a random-effects model (I² = 61%, P = 0.04), and excluding the studies one by one revealed that the I² values remained stable, with no source of heterogeneity found. The results suggested that patients with tumors at the appendiceal site had a significantly lower risk of postoperative AKI (OR = 0.48, 95% CI: 0.24, 0.98, p = 0.04) (Fig. A). In addition, four studies each reported the role of primary tumors at the mesothelioma and ovarian sites in postoperative AKI. Our pooled analysis revealed that the risk of postoperative AKI was substantially increased whether the tumor was mesothelioma (OR = 2.54, 95% CI: 1.21, 5.30, p = 0.01) (Fig. B) or ovarian (OR = 2.31, 95% CI: 1.37, 3.89, p = 0.002) (Fig. C).
Subgroup analysis
Considering that the populations in our included studies came from around the world, we divided them into three regions, Europe, Asia, and the Americas, and conducted subgroup analyses of the age and gender factors to explore whether regional factors contributed to greater heterogeneity. The results suggested that in the Asian subgroup (p = 0.74) and the European subgroup (p = 0.09), the age factor did not differ significantly between the AKI and non-AKI groups. In contrast, in the Americas subgroup, older patients were more likely to develop AKI after CRS + HIPEC (p = 0.003) (Fig. A). Therefore, we conclude that age as a risk factor for postoperative AKI may be more applicable to the American population.
Subgroup analysis of the gender factor suggested that the results were consistent with the overall effect in the American (p = 0.007) and European (p = 0.04) populations, whereas the Asian population showed the opposite result (Fig. B). In addition, since the included studies used two different diagnostic criteria for AKI, AKIN and KDIGO, we also performed subgroup analyses by diagnostic criterion. For diabetes mellitus, a preoperative risk factor, the results suggested that diabetes mellitus did not lead to an increased risk of postoperative AKI under the AKIN diagnostic criteria (p = 0.34), whereas under the KDIGO diagnostic criteria the results were consistent with the overall effect (p = 0.02) (Fig. D). We also performed subgroup analyses for the appendiceal site (Fig. E) and mesothelioma (Fig. F); these suggested significant differences between the two groups only under the KDIGO diagnostic criteria. Therefore, we conclude that part of the meta-analysis results may be more accurate under the KDIGO diagnostic criteria. By contrast, the subgroup analysis of the gender factor suggested a significant difference between the two groups only under the AKIN diagnostic criteria (Fig. C). The detailed results of the subgroup analyses are shown in Table .
Publication bias
Due to the limited number of included studies, it was not possible to visualize publication bias using funnel plots, so we used Egger's test and Begg's test to explore whether there was significant publication bias among the included studies. In addition, we performed sensitivity analyses of the relevant risk factors using a leave-one-out exclusion approach to ensure the stability of the study results. We conducted Egger's test and Begg's test for age (Appendix Figure S2A-S2D), gender (Appendix Figure S3A-S3D), BMI (Appendix Figure S4A-S4D), DM (Appendix Figure S5A-S5D), and the appendiceal site (Appendix Figure S6A-S6D), respectively, and observed the change in the overall effect value after excluding the included studies one by one. The p-values of Egger's test and Begg's test for the above risk factors were all greater than 0.05, suggesting no significant publication bias among the studies. Therefore, we conclude that publication bias has no significant effect on the findings of this meta-analysis, and the results are highly stable. The detailed results of Egger's test and Begg's test are shown in Table .
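The two robustness checks used above, leave-one-out sensitivity analysis and Egger's regression test, can be sketched in a few lines. The code below is illustrative only (it is not the Stata or RevMan code used in the study, and the inputs would be hypothetical per-study effects and standard errors); it reuses the pool() helper from the earlier pooling sketch.

```python
import numpy as np
from scipy import stats

def leave_one_out(effects, variances, pool):
    """Re-pool the data k times, omitting one study at a time, to check whether
    any single study drives the heterogeneity or changes the conclusion."""
    out = []
    for i in range(len(effects)):
        eff = [e for j, e in enumerate(effects) if j != i]
        var = [v for j, v in enumerate(variances) if j != i]
        out.append(pool(eff, var))          # (pooled effect, 95% CI, I^2) without study i
    return out

def egger_test(effects, std_errors):
    """Egger's regression asymmetry test: regress the standardized effect
    (effect/SE) on precision (1/SE) and test whether the intercept differs from 0."""
    effects = np.asarray(effects, float)
    se = np.asarray(std_errors, float)
    y, x = effects / se, 1.0 / se
    n = len(x)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (intercept + slope * x)
    s2 = resid @ resid / (n - 2)
    se_intercept = np.sqrt(s2 * (1.0 / n + x.mean() ** 2 / ((x - x.mean()) ** 2).sum()))
    t_stat = intercept / se_intercept
    p_value = 2 * stats.t.sf(abs(t_stat), df=n - 2)
    return intercept, p_value               # p > 0.05 suggests no small-study asymmetry
```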
A total of 63 articles on AKI after CRS + HIPEC were retrieved through the developed search strategy (Fig. ). All retrieved articles were imported into Endnote 20.0 software, and 16 duplicate articles were excluded using the software de-duplication function. By reading the titles and abstracts, 22 papers that did not meet the inclusion criteria were initially excluded. A total of 25 papers were retained after the initial screening. Among the remaining papers, we performed full-text reading and excluded a total of 18 articles that did not meet the inclusion requirements, including 6 papers that could not extract data, 3 papers with Newcastle–Ottawa Scale (NOS) scores lower than 6, 5 papers describing problems that were not related to the topic, and 4 papers that did not record the target results. In the end, a total of 7 papers were included in this systematic review and meta-analysis, all of which were retrospective studies and all of which described in detail the number of events or specific values of the different risk factors for postoperative AKI in the AKI group versus the Non-AKI group (Table ). This systematic review and meta-analysis investigated a total of 1550 patients who developed AKI after CRS + HIPEC in seven studies. The included studies were published online from 2017 to 2024. Two of the included studies were from the United States, two from China, two from Germany, and one from Portugal. The largest proportion of all patients were from the United States (34.39%), followed by China (28.39%), Germany (25.48%), and Portugal (11.74%). A total of 28 risk factors as well as 6 different tumor primary sites were involved in the analysis of the impact of postoperative AKI. The risk factors were divided into preoperative and intraoperative for independent analysis, and a summary of the detailed analysis results can be found in Table . The main primary sites of the tumors were appendix (39%) and colorectal cancer (21%), followed by gastric (13%),pseudomyxoma peritonei (9%), mesothelioma (6%),ovarian (6%), andothers (4%). detailed distribution can be found in Appendix Figure S1. Detailed characteristics of the seven included studies can be found in Table .
In the seven studies included in this meta-analysis, we pooled and analyzed a total of 20 potential preoperative risk factors. These included the use of drugs such as ACEI or ARB, Diuretics, and NSAIDs, as well as patient age, gender, BMI, and PCI. We also performed a pooled analysis of patients' preoperative Alb, preoperative Hb, preoperative eGFR, preoperative Urea, and Preoperative creatinine. In addition, the effect of various preoperative underlying diseases of the patients, such as Chronic kidney diseas, Diabetes mellitus, Heart disease, Hypertension, and the patients' preoperative Neoadjuvant therapy and Preoperative chemotherapy on the Postoperative AKI were also included in our analysis.Hospitalization (days) and ICU duration (days) as potential risk factors for postoperative AKI were also explored. All the data related to the above risk factors were collected through specially designed and scientifically based statistical forms, and preoperative risk factors with complete and comparable data were analyzed in a pooled manner, in which eight risk factors such as age, gender, BMI, PCI, eGFR, Hb, Diabetes mellitus, and Hypertension were considered to have a statistically significant ( p < 0.05), and the results of meta-analysis of all the above potential risk factors can be found in Table . Age A total of six studies recorded information on the distribution of age in the AKI group versus the Non-AKI group, and information on age was reported as mean ± standard deviation. The heterogeneity test suggested that there was no heterogeneity between studies ( I 2 = 0%, P = 0.50). Therefore, a fixed-effects model was chosen for the analysis and the pooled analysis showed (MD = 2.04, 95% CI: 0.55,3.52, p = 0.007) (Fig. A). We therefore conclude that advanced age is one of the preoperative risk factors for AKI after CRS + HIPEC. Sex A total of six studies described the gender distribution of patients, recorded using dichotomous variables. After performing the test for heterogeneity ( I 2 = 21%, P = 0.27), meta-analysis was performed using a fixed-effects model. The results suggested a significant and statistically significant difference between the two groups (OR = 1.53, 95% CI: 1.17,2.00, p = 0.002) (Fig. B). Therefore, we conclude that the risk of AKI after CRS + HIPEC is greater in male patients than in female patients. Gender is one of the preoperative risk factors for postoperative AKI. BMI A total of five studies reported detailed information on patients' BMI using mean ± standard deviation data recording. The heterogeneity test suggested no heterogeneity between studies ( I 2 = 0%, P = 0.65). Analysis using the fixed effect model showed a significant and statistically significant difference between the two groups (MD = 1.22, 95% CI: 0.42,2.03, p = 0.003) (Fig. C). Therefore, we can conclude that the risk of postoperative AKI after CRS + HIPEC increases as the value of BMI increases. Peritoneal cancer index A total of three studies referred to patients' preoperative Peritoneal cancer index (PCI), and the data were described using mean ± standard deviation recording. By heterogeneity test ( I 2 = 0%, P = 0.51), P > 0.1 suggests that there is no heterogeneity between studies, so we used fixed effect model for pooled analysis. The results showed a significant difference between the AKI and Non-AKI groups, and the results were statistically significant (MD = 3.79, 95% CI: 1.49,3.79, p = 0.003) (Fig. D). 
Therefore, we conclude that higher preoperative PCI is one of the significant risk factors for postoperative AKI. Preoperative eGFR and Hb A total of three studies reported details of Preoperative eGFR, recorded as mean ± standard deviation. Pooled analysis of the data using continuous variables and random effects model ( I 2 = 0%, p = 0.97) suggested that there was no heterogeneity between the two and there was a significant difference between the two groups (MD = -11.53, 95%CI: -17.89,-5.17, p = 0.0004) (Fig. E). Therefore, we conclude that a decrease in Preoperative eGFR leads to an increase in the incidence of AKI after CRS + HIPEC and that Preoperative eGFR is a significant preoperative risk factor. In addition, a total of 2 studies referred to the Preoperative Hb of patients and recorded detailed data in the form of mean ± standard deviation. Heterogeneity analysis showed no heterogeneity between the two ( I 2 = 0%, P = 0.94), which was analyzed using continuous variables and fixed effects model. The results suggested a significant difference between the two groups (MD = -5.87, 95%CI: -10.90,-0.83, p = 0.02) (Fig. F) and were statistically significant. In other words, a decrease in preoperative Hb increased the incidence of postoperative AKI. Diabetes mellitus and hypertension A total of five studies reported diabetes in 1310 patients, detailing the number of people in the AKI group compared with those in the Non-AKI group. The heterogeneity test suggested that there was no heterogeneity among the studies ( I 2 = 6%, P = 0.37), so we used a fixed-effects model for meta-analysis. The results suggested that diabetes mellitus was one of the significant risk factors for AKI after CRS + HIPEC, and the results were statistically significant (OR = 1.78, 95% CI: 1.15,2.75, p = 0.01) (Fig. G). In addition a total of four studies reported information on patients' hypertension, of which the results of Annika et al. versus Bai et al. indicated that preoperative hypertension was a risk factor for postoperative AKI ( P > 0.05). On the contrary, the findings of Lu et al. versus Lukas et al. concluded that hypertension does not lead to an increased risk of postoperative AKI. Therefore, we performed a meta-analysis using a random-effects model ( I 2 = 59%, p = 0.06), which showed a significant difference between the two groups (OR = 2.43, 95% CI: 1.37,4.31, p = 0.002) (Fig. H), and the results were statistically significant. For the high heterogeneity of the pooled analysis, we used a case-by-case exclusion method to explore the source of heterogeneity, and the results showed that the I 2 values stabilized and no significant source of heterogeneity was found. Other preoperative risk factors Other preoperative risk factors included the use of drugs such as ACEI or ARB, Diuretics, NSAIDs, laboratory findings such as preoperative Alb, preoperative Urea and Preoperative creatinine. Similarly, we performed a pooled analysis of the above risk factors, but the results suggested that there was no significant difference between the two groups (P > 0.05). Detailed results of heterogeneity analysis and pooled analysis can be found in Table .
A total of six studies recorded information on the distribution of age in the AKI group versus the Non-AKI group, and information on age was reported as mean ± standard deviation. The heterogeneity test suggested that there was no heterogeneity between studies ( I 2 = 0%, P = 0.50). Therefore, a fixed-effects model was chosen for the analysis and the pooled analysis showed (MD = 2.04, 95% CI: 0.55,3.52, p = 0.007) (Fig. A). We therefore conclude that advanced age is one of the preoperative risk factors for AKI after CRS + HIPEC.
A total of six studies described the gender distribution of patients, recorded using dichotomous variables. After performing the test for heterogeneity ( I 2 = 21%, P = 0.27), meta-analysis was performed using a fixed-effects model. The results suggested a significant and statistically significant difference between the two groups (OR = 1.53, 95% CI: 1.17,2.00, p = 0.002) (Fig. B). Therefore, we conclude that the risk of AKI after CRS + HIPEC is greater in male patients than in female patients. Gender is one of the preoperative risk factors for postoperative AKI.
A total of five studies reported detailed information on patients' BMI using mean ± standard deviation data recording. The heterogeneity test suggested no heterogeneity between studies ( I 2 = 0%, P = 0.65). Analysis using the fixed effect model showed a significant and statistically significant difference between the two groups (MD = 1.22, 95% CI: 0.42,2.03, p = 0.003) (Fig. C). Therefore, we can conclude that the risk of postoperative AKI after CRS + HIPEC increases as the value of BMI increases.
A total of three studies referred to patients' preoperative Peritoneal cancer index (PCI), and the data were described using mean ± standard deviation recording. By heterogeneity test ( I 2 = 0%, P = 0.51), P > 0.1 suggests that there is no heterogeneity between studies, so we used fixed effect model for pooled analysis. The results showed a significant difference between the AKI and Non-AKI groups, and the results were statistically significant (MD = 3.79, 95% CI: 1.49,3.79, p = 0.003) (Fig. D). Therefore, we conclude that higher preoperative PCI is one of the significant risk factors for postoperative AKI.
A total of three studies reported details of Preoperative eGFR, recorded as mean ± standard deviation. Pooled analysis of the data using continuous variables and random effects model ( I 2 = 0%, p = 0.97) suggested that there was no heterogeneity between the two and there was a significant difference between the two groups (MD = -11.53, 95%CI: -17.89,-5.17, p = 0.0004) (Fig. E). Therefore, we conclude that a decrease in Preoperative eGFR leads to an increase in the incidence of AKI after CRS + HIPEC and that Preoperative eGFR is a significant preoperative risk factor. In addition, a total of 2 studies referred to the Preoperative Hb of patients and recorded detailed data in the form of mean ± standard deviation. Heterogeneity analysis showed no heterogeneity between the two ( I 2 = 0%, P = 0.94), which was analyzed using continuous variables and fixed effects model. The results suggested a significant difference between the two groups (MD = -5.87, 95%CI: -10.90,-0.83, p = 0.02) (Fig. F) and were statistically significant. In other words, a decrease in preoperative Hb increased the incidence of postoperative AKI.
A total of five studies reported diabetes in 1310 patients, detailing the number of people in the AKI group compared with those in the Non-AKI group. The heterogeneity test suggested that there was no heterogeneity among the studies ( I 2 = 6%, P = 0.37), so we used a fixed-effects model for meta-analysis. The results suggested that diabetes mellitus was one of the significant risk factors for AKI after CRS + HIPEC, and the results were statistically significant (OR = 1.78, 95% CI: 1.15,2.75, p = 0.01) (Fig. G). In addition a total of four studies reported information on patients' hypertension, of which the results of Annika et al. versus Bai et al. indicated that preoperative hypertension was a risk factor for postoperative AKI ( P > 0.05). On the contrary, the findings of Lu et al. versus Lukas et al. concluded that hypertension does not lead to an increased risk of postoperative AKI. Therefore, we performed a meta-analysis using a random-effects model ( I 2 = 59%, p = 0.06), which showed a significant difference between the two groups (OR = 2.43, 95% CI: 1.37,4.31, p = 0.002) (Fig. H), and the results were statistically significant. For the high heterogeneity of the pooled analysis, we used a case-by-case exclusion method to explore the source of heterogeneity, and the results showed that the I 2 values stabilized and no significant source of heterogeneity was found.
Other preoperative risk factors included the use of drugs such as ACEI or ARB, Diuretics, NSAIDs, laboratory findings such as preoperative Alb, preoperative Urea and Preoperative creatinine. Similarly, we performed a pooled analysis of the above risk factors, but the results suggested that there was no significant difference between the two groups (P > 0.05). Detailed results of heterogeneity analysis and pooled analysis can be found in Table .
In our seven included articles, we defined a total of eight risk factors as intraoperative risk factors, including the use of chemotherapeutic agents cisplatin and mitomycin in HIPEC, IO fluid, IO SBP < 100, IO transfusion, IO vasopressors, operation time, urine output, and other parameters. We tested the heterogeneity of the above risk factors and selected the appropriate effect model for pooled analysis based on their results to obtain scientific conclusions. The results suggested that the use of chemotherapeutic drugs cisplatin, mitomycin and IO SBP < 100(min) were significantly and statistically different between the two groups. Detailed results are shown in Table . Operative time A total of three studies reported specific surgical times, using a mean ± standard deviation form of data recording. After testing for heterogeneity ( I 2 = 50%, P = 0.14), the pooled data were analyzed using a random effects model. The results showed that the duration of surgery was not significantly different between the two groups (MD = 21.92, 95%CI: -20.25,64.08, p = 0.31) (Fig. A) and the results were not statistically significant. In other words, it is not yet possible to consider the duration of surgery as one of the risk factors for AKI after CRS + HIPEC. Chemotherapy regimens A total of 5 of the 7 included studies in this meta-analysis mentioned and documented the number of events in the AKI group compared with the number of events in the Non-AKI group with different intraoperative chemotherapy regimens. Intraoperative chemotherapy regimens included the use of drugs such as cistplatin, oxilaplatin, and mitomycin, of which the data for cistplatin and mitomycin were comparable, so we analyzed the data for these two drugs separately. A total of four studies reported a comparison of the number of events in patients using the chemotherapeutic agent cistplatin in HIPEC, using a dichotomous variable recording format. Heterogeneity analysis was performed and revealed heterogeneity among the studies ( I 2 = 79%, p = 0.01), and the results suggested that intraoperative use of cistplatin greatly increased the risk of postoperative AKI (OR = 2.84, 95% CI: 1.27,6.35, p < 0.0001) (Fig. B). When we explored the source of heterogeneity using a cull-by-cull approach, we found that I 2 values stabilized (66%-79%) and no source of heterogeneity was identified. Three studies reported the use of the drug mitomycin between the two groups, and the heterogeneity analysis revealed a large heterogeneity between the studies ( I 2 = 70%, P = 0.04), so we used a case-by-case exclusion method to explore the source of heterogeneity. When we excluded the data from Annika et al. study, we found that the heterogeneity disappeared ( I 2 = 0%, P = 0.33), and thus we determined that the data from Annika et al.'s study was the source of heterogeneity. After eliminating the source of heterogeneity, we found that the data from the studies of Eduarda et al. and Juan et al. had considerable homogeneity, and by pooling and analyzing the data from both, we found that the use of mitomycin was significantly different between the two groups, and that the use of mitomycin was one of the protective factors given for postoperative AKI, and the results had a statistically significant (OR = 0.41, 95% CI: 0.26,0.63, p < 0.0001) (Fig. C). IO SBP < 100 mmHg(min) A total of 2 studies recorded the specific time (min) for intraoperative IO SBP < 100 mmHg and described the data as mean ± standard deviation. Pooled analysis of the data from Lu et al. 
and Lukas et al. using a random-effects model ( I 2 = 0%, p = 0.50) revealed a significant difference between the two groups and the results were statistically significant (MD = 10.95, 95%CI:3.15,18.75, p = 0.006) (Fig. D). Therefore, we got the conclusion that prolonged duration of intraoperative IO SBP < 100 leads to increased risk of postoperative AKI after CRS + HIPEC and is one of the intraoperative risk factors. Other intraoperative risk factors Other intraoperative risk factors included parameters such as IO fluid, IO transfusion, IO vasopressors, and urine output. Detailed results of heterogeneity analysis with meta-analysis can be found in Table . Two studies explored the effect of IO fluid on postoperative AKI, and the results told us that IO fluid was not one of the intraoperative risk factors (MD = 79.20, 95%CI:-135.33,293.72, p = 0.47). Three studies mentioned detailed data on IO transfusion, analyzed by random effects model ( I 2 = 60%, p = 0.08), which suggested that IO transfusion was not significantly different between the two groups (OR = 0.90, 95%CI:0.47,1.72, p = 0.74). Heterogeneity between studies disappeared after excluding data from the study by Lukas et al. ( I 2 = 0%, p = 0.79). However, the conclusions did not change (OR = 1.20, 95%CI:0.80,1.79, p = 0.38). 2 studies reported the effect of IO vasopressors on postoperative AKI, and the data were pooled and analyzed using a random-effects model, and the results suggested that the use of intraoperative vasopressors did not increase the risk of AKI. In addition, a total of 2 studies recorded patients' intraoperative urine output, and complete data were recorded by mean ± standard deviation. We also performed a meta-analysis, and the results suggested that the increase or decrease in intraoperative urine output was not one of the risk factors for postoperative AKI, and the results were not statistically significant (MD = 27.15, 95% CI:-127.21,181.51, p = 0.73).
A total of three studies reported specific surgical times, using a mean ± standard deviation form of data recording. After testing for heterogeneity ( I 2 = 50%, P = 0.14), the pooled data were analyzed using a random effects model. The results showed that the duration of surgery was not significantly different between the two groups (MD = 21.92, 95%CI: -20.25,64.08, p = 0.31) (Fig. A) and the results were not statistically significant. In other words, it is not yet possible to consider the duration of surgery as one of the risk factors for AKI after CRS + HIPEC.
A total of 5 of the 7 included studies in this meta-analysis mentioned and documented the number of events in the AKI group compared with the number of events in the Non-AKI group with different intraoperative chemotherapy regimens. Intraoperative chemotherapy regimens included the use of drugs such as cistplatin, oxilaplatin, and mitomycin, of which the data for cistplatin and mitomycin were comparable, so we analyzed the data for these two drugs separately. A total of four studies reported a comparison of the number of events in patients using the chemotherapeutic agent cistplatin in HIPEC, using a dichotomous variable recording format. Heterogeneity analysis was performed and revealed heterogeneity among the studies ( I 2 = 79%, p = 0.01), and the results suggested that intraoperative use of cistplatin greatly increased the risk of postoperative AKI (OR = 2.84, 95% CI: 1.27,6.35, p < 0.0001) (Fig. B). When we explored the source of heterogeneity using a cull-by-cull approach, we found that I 2 values stabilized (66%-79%) and no source of heterogeneity was identified. Three studies reported the use of the drug mitomycin between the two groups, and the heterogeneity analysis revealed a large heterogeneity between the studies ( I 2 = 70%, P = 0.04), so we used a case-by-case exclusion method to explore the source of heterogeneity. When we excluded the data from Annika et al. study, we found that the heterogeneity disappeared ( I 2 = 0%, P = 0.33), and thus we determined that the data from Annika et al.'s study was the source of heterogeneity. After eliminating the source of heterogeneity, we found that the data from the studies of Eduarda et al. and Juan et al. had considerable homogeneity, and by pooling and analyzing the data from both, we found that the use of mitomycin was significantly different between the two groups, and that the use of mitomycin was one of the protective factors given for postoperative AKI, and the results had a statistically significant (OR = 0.41, 95% CI: 0.26,0.63, p < 0.0001) (Fig. C).
A total of two studies recorded the specific duration (min) of intraoperative systolic blood pressure (SBP) < 100 mmHg, described as mean ± standard deviation. Pooled analysis of the data from Lu et al. and Lukas et al. using a random-effects model (I² = 0%, p = 0.50) revealed a statistically significant difference between the two groups (MD = 10.95, 95% CI: 3.15, 18.75, p = 0.006) (Fig. D). We therefore conclude that a prolonged duration of intraoperative SBP < 100 mmHg increases the risk of postoperative AKI after CRS + HIPEC and is one of the intraoperative risk factors.
Other intraoperative risk factors included intraoperative (IO) fluid volume, IO transfusion, IO vasopressors, and urine output. Detailed results of the heterogeneity analyses and meta-analyses can be found in Table . Two studies explored the effect of IO fluid on postoperative AKI; IO fluid was not an intraoperative risk factor (MD = 79.20, 95% CI: −135.33, 293.72, p = 0.47). Three studies provided detailed data on IO transfusion, analyzed with a random-effects model (I² = 60%, p = 0.08); IO transfusion did not differ significantly between the two groups (OR = 0.90, 95% CI: 0.47, 1.72, p = 0.74). Heterogeneity disappeared after excluding the study by Lukas et al. (I² = 0%, p = 0.79), but the conclusion did not change (OR = 1.20, 95% CI: 0.80, 1.79, p = 0.38). Two studies reported the effect of IO vasopressors on postoperative AKI; pooled analysis using a random-effects model suggested that intraoperative vasopressor use did not increase the risk of AKI. In addition, two studies recorded intraoperative urine output as mean ± standard deviation; meta-analysis suggested that intraoperative urine output was not a risk factor for postoperative AKI, and the result was not statistically significant (MD = 27.15, 95% CI: −127.21, 181.51, p = 0.73).
A total of five of the seven included studies reported the number of events and the incidence of AKI after CRS + HIPEC for different primary tumor sites. In our dataset, the most common primary site was the appendix (39%), followed by colorectal (21%) and gastric (13%); the remaining sites were pseudomyxoma peritonei (9%), mesothelioma (8%), ovarian (6%), and others (4%). For each primary tumor site we performed separate heterogeneity tests and selected an appropriate model for meta-analysis to investigate whether the primary tumor site was a potential risk factor for postoperative AKI. The detailed results of the heterogeneity analyses and meta-analyses are shown in Table . Five studies reported the number of events for appendiceal primaries in the AKI and Non-AKI groups; the dichotomous data were pooled with a random-effects model (I² = 61%, p = 0.04), and leave-one-out exclusion showed that the I² values remained stable, with no single source of heterogeneity identified. The results suggested that patients with appendiceal tumors had a significantly lower risk of postoperative AKI (OR = 0.48, 95% CI: 0.24, 0.98, p = 0.04) (Fig. A). In addition, four studies each reported the effect of mesothelioma and ovarian primaries on postoperative AKI. Pooled analysis revealed that the risk of postoperative AKI was substantially increased for both mesothelioma (OR = 2.54, 95% CI: 1.21, 5.30, p = 0.01) (Fig. B) and ovarian (OR = 2.31, 95% CI: 1.37, 3.89, p = 0.002) (Fig. C) primaries.
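For readers who want to see the pooling of dichotomous data end to end, the sketch below computes a DerSimonian-Laird random-effects pooled odds ratio from study-level 2x2 counts. The counts are invented for illustration and are not the data from the included studies.

```python
# Minimal sketch of a DerSimonian-Laird random-effects pooled odds ratio,
# using invented study counts (events / totals per group), not the real data.
import numpy as np
from scipy import stats

# Hypothetical 2x2 counts: (events_AKI, n_AKI, events_nonAKI, n_nonAKI) per study
studies = [(12, 40, 30, 160), (8, 25, 50, 190), (15, 35, 60, 200), (5, 20, 40, 150)]

log_or, var = [], []
for a, n1, c, n2 in studies:
    b, d = n1 - a, n2 - c
    log_or.append(np.log((a * d) / (b * c)))
    var.append(1 / a + 1 / b + 1 / c + 1 / d)
log_or, var = np.array(log_or), np.array(var)

w = 1 / var                                   # fixed-effect weights
fixed = np.sum(w * log_or) / np.sum(w)
q = np.sum(w * (log_or - fixed) ** 2)
df = len(studies) - 1
tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1 / (var + tau2)                       # random-effects weights
pooled = np.sum(w_re * log_or) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
ci = np.exp([pooled - 1.96 * se, pooled + 1.96 * se])
p = 2 * (1 - stats.norm.cdf(abs(pooled / se)))
print(f"OR = {np.exp(pooled):.2f}, 95% CI: {ci[0]:.2f}-{ci[1]:.2f}, p = {p:.3f}")
```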
Considering that the populations in the included studies came from around the world, we divided them into three regions (Europe, Asia, and the Americas) and conducted subgroup analyses of the age and gender factors to explore whether regional factors contributed to the heterogeneity. In the Asian subgroup (p = 0.74) and the European subgroup (p = 0.09), age did not differ significantly between the AKI and Non-AKI groups. In contrast, in the Americas subgroup, older patients were more likely to develop AKI after CRS + HIPEC (p = 0.003) (Fig. A). We therefore conclude that age as a risk factor for postoperative AKI may apply mainly to populations in the Americas. Subgroup analysis of the gender factor showed results consistent with the total effect in the American (p = 0.007) and European (p = 0.04) populations, whereas the Asian population showed the opposite result (Fig. B). In addition, because the included studies used two different diagnostic criteria for AKI (AKIN and KDIGO), we performed subgroup analyses by diagnostic criterion. For diabetes mellitus, a preoperative risk factor, the results suggested that diabetes did not increase the risk of postoperative AKI under the AKIN criteria (p = 0.34), whereas under the KDIGO criteria the results were consistent with the total effect (p = 0.02) (Fig. D). We also performed subgroup analyses for appendix (Fig. E) and mesothelioma (Fig. F); both showed significant differences between the two groups only under the KDIGO criteria. We therefore conclude that some of the meta-analysis results may be more reliable under the KDIGO criteria. By contrast, the subgroup analysis of gender showed a significant difference between the two groups only under the AKIN criteria (Fig. C). The detailed subgroup analysis results are shown in Table .
Because of the limited number of included studies, publication bias could not be assessed visually with funnel plots, so we used Egger's test and Begg's test to explore whether significant publication bias existed among the included studies. In addition, we performed sensitivity analyses of the relevant risk factors using a leave-one-out exclusion approach to confirm the stability of the results. We conducted Egger's and Begg's tests for age (Appendix Figure S2A-S2D), gender (Appendix Figure S3A-S3D), BMI (Appendix Figure S4A-S4D), DM (Appendix Figure S5A-S5D), and appendix (Appendix Figure S6A-S6D), and observed the change in the total effect after excluding the included studies one by one. The p-values of Egger's and Begg's tests for the above risk factors were all greater than 0.05, suggesting no significant publication bias among the studies. We therefore conclude that publication bias has no significant effect on the findings of this meta-analysis and that the results are stable. The detailed Egger's and Begg's test results are shown in Table .
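To illustrate what Egger's regression test does, the sketch below regresses the standardized effect size on precision and tests whether the intercept differs from zero, which is the usual indication of small-study asymmetry. The effect sizes and standard errors are invented placeholders, not values from this meta-analysis.

```python
# Illustrative Egger's regression test for funnel-plot asymmetry.
# Effect sizes and standard errors are invented placeholders, not study data.
import numpy as np
from scipy import stats

log_or = np.array([0.69, 1.25, 0.18, 1.39, 0.47])   # hypothetical log odds ratios
se = np.array([0.33, 0.35, 0.36, 0.38, 0.30])        # hypothetical standard errors

# Egger's test: regress the standardized effect (theta / SE) on precision (1 / SE);
# an intercept significantly different from zero suggests small-study asymmetry.
z = log_or / se
precision = 1.0 / se
slope, intercept, r, p_slope, stderr = stats.linregress(precision, z)

# p-value for the intercept of the regression of z on precision
n = len(z)
resid = z - (intercept + slope * precision)
s2 = np.sum(resid**2) / (n - 2)
se_intercept = np.sqrt(s2 * (1 / n + precision.mean() ** 2 / np.sum((precision - precision.mean()) ** 2)))
t_stat = intercept / se_intercept
p_intercept = 2 * stats.t.sf(abs(t_stat), df=n - 2)
print(f"Egger intercept = {intercept:.3f}, p = {p_intercept:.3f}")
```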
Patients with peritoneal surface malignancies (PSM) often face severe complications and a poor survival prognosis . Currently, cytoreductive surgery (CRS) combined with hyperthermic intraperitoneal chemotherapy (HIPEC) has become the primary treatment modality for patients with primary or metastatic peritoneal tumors . CRS + HIPEC significantly improves survival , but this improvement comes with higher mortality and morbidity: the perioperative mortality rate is approximately 4% and the combined incidence of postoperative complications is up to 40% . Acute kidney injury (AKI), one of the most serious complications in cancer treatment, significantly increases the length of hospitalization and is strongly associated with high morbidity and mortality . Despite continuous improvement in perioperative management, the incidence of complications after major surgery still exceeds 30% , of which AKI accounts for 13%, and patients who develop AKI face a more than six-fold increased risk of death . Therefore, early identification of risk factors for AKI after CRS + HIPEC and preoperative interventions targeting reversible risk factors are essential to improve patient prognosis. In this study, the combined incidence and risk factors of AKI in PSM patients after CRS + HIPEC were comprehensively analyzed by meta-analysis. Across the seven included articles, the combined incidence of postoperative AKI was approximately 22.9%, whereas previous studies have reported incidences ranging from 1 to 48% . The large differences in incidence among studies may stem from differences in sample size or in the diagnostic criteria used for AKI. In this study, five studies used the widely recognized KDIGO diagnostic criteria and two used the AKIN criteria. When the incidence of AKI was calculated separately by criterion, it was 23.7% under AKIN and 22.54% under KDIGO; these values are close to each other and fall within the 1–48% range. Our study showed that a higher body mass index (BMI) was associated with an increased risk of AKI after CRS + HIPEC (p < 0.05), which is consistent with the findings of Juan et al. However, the studies by Samer et al. and Eduarda et al. did not find a significant association between BMI and AKI. We note that the study by Juan et al. used the AKIN diagnostic criteria whereas the other studies used the KDIGO criteria. After excluding the data from Juan et al., there was no significant association between BMI and AKI (p > 0.05), suggesting that BMI as a risk factor for AKI may apply only under the AKIN criteria. Obesity is associated with a variety of inflammatory responses and diseases, including heart disease, hypertension, and osteoarthritis , and has been linked to the development and progression of cancer . Obese patients face a higher risk of perioperative complications , whereas underweight patients have higher surgical mortality associated with low albumin levels and energy reserves . Therefore, preoperative weight management is crucial for postoperative recovery. The Peritoneal Cancer Index (PCI) scoring system, proposed by Sugarbaker et al., is widely used to assess peritoneal metastasis of abdominal tumors .
The system divides the abdomen into 13 regions and assesses tumor spread and the feasibility of CRS by scoring tumor size . PCI has also been associated with a high risk of death after CRS + HIPEC . Our meta-analysis showed that higher preoperative PCI values were associated with a higher incidence of postoperative AKI. This finding is consistent with the study of Samer et al. but contrary to the studies of Eduarda et al. and Lu et al., which may be related to differences in PCI scoring methods and the subjectivity of the scorers. PCI is a powerful tool for assessing preoperative risk and postoperative prognosis, and our study also demonstrated that PCI can reliably predict postoperative AKI. Therefore, PCI scoring should be performed routinely before surgery in patients with peritoneal carcinomatosis to assess tumor load and the risk of postoperative complications and to guide treatment selection. The strong association of advanced age and various renal function indices with AKI has been confirmed by several studies . People aged 65 years or older are traditionally defined as elderly, and this segment of the population is currently the fastest growing in developed countries . AKI is one of the most common emergencies among elderly patients, and its incidence has been rising in recent years. Its main pathogenesis lies in the structural changes and functional decline that accompany aging kidneys: as the kidneys age, the renal blood vessels and glomeruli gradually become sclerotic, leading to a series of functional changes such as a decline in eGFR, a decrease in the ultrafiltration coefficient, and a reduced capacity for renal autoregulation . The results of our meta-analysis suggest that advanced age and decline in eGFR are both high-risk factors for AKI after CRS + HIPEC. For patients aged 65 years or above, more attention should be paid to changes in preoperative renal function indices during preoperative evaluation, and timely monitoring and early intervention are particularly important. In addition, proteinuria and eGFR should be monitored together when identifying patients with acute kidney injury; this not only improves the identification rate of AKI but also helps evaluate patients' long-term prognosis . In our study, diabetes mellitus (DM) and hypertension were identified as preoperative risk factors for AKI after CRS + HIPEC. DM and hypertension have gained widespread attention as serious global public health problems, and their strong association with AKI has been confirmed by several studies . DM-associated AKI mainly originates from circulatory disorders and changes in the renal microenvironment caused by metabolic disturbance, which impair the self-repairing ability of the kidneys and lead to a series of complications . The main pathological mechanism of hypertension-related AKI is that sustained elevation of blood pressure keeps the kidneys in a state of hyperperfusion, leading to thickening and hardening of the vessel wall and ultimately to atherosclerosis; glomerular injury and tubulointerstitial fibrosis are also important contributors. Therefore, before CRS + HIPEC is performed in patients with DM and hypertension, treatment should focus on glycemic control and blood pressure management.
We performed a meta-analysis of eight intraoperative factors and ultimately identified the duration of intraoperative SBP < 100 mmHg (min) as an intraoperative risk factor for AKI. The KDIGO guidelines also note that maintaining stable intraoperative blood pressure, and in particular avoiding hypotension, reduces the risk of kidney injury. Several single-centre and multicentre studies have demonstrated an association between intraoperative hypotension and postoperative AKI . However, blood pressure was described in those studies as mean arterial pressure (MAP), and in 40% of those patients MAP fell below 65 mmHg (for 10–12 min) during surgery. Therefore, although we confirmed that intraoperative SBP < 100 mmHg increases the risk of postoperative AKI, the number of included studies was limited, and more studies are needed to confirm the association between intraoperative hypotension and postoperative AKI after CRS + HIPEC. In addition, the effect of different MAP gradients on AKI in hypotensive states will be one of our future research directions. The chemotherapeutic agents commonly used in HIPEC include oxaliplatin, cisplatin (CDDP), and mitomycin C (MMC). Our study conducted a meta-analysis of the effects of MMC and cisplatin on postoperative AKI; one surprise was that MMC appeared to be relatively protective against postoperative AKI. Existing studies have explored various aspects of the use of MMC and oxaliplatin in HIPEC. Among them, the results of van et al. suggested that, compared with MMC infusion therapy, oxaliplatin not only shortened the time but also had no adverse effect on postoperative complications or short-term survival . In a retrospective study of patients with colon cancer by the American Society for Peritoneal Surface Malignancies (ASPSM), survival was analyzed in a total of 539 patients who received either MMC or oxaliplatin ; the MMC group appeared to have a better survival prognosis (overall survival 32.7 months) than the oxaliplatin group (31.4 months). Moreover, one study showed that intraoperative coadministration of cisplatin and MMC was not associated with an increased incidence of postoperative AKI . Cisplatin, a platinum-based agent, is widely used in the treatment of tumors and is particularly effective in gastrointestinal and gynecological tumors. However, its use is associated with several complications, including nephrotoxicity, ototoxicity, neurotoxicity, and vomiting, of which nephrotoxicity is the most significant. The main pathogenesis of nephrotoxicity is acute or subacute tubular necrosis due to injury of the proximal tubules. In line with the KDIGO guidelines, we recommend avoiding nephrotoxic drugs and using renoprotective measures such as sodium thiosulfate and amifostine when necessary, especially after cisplatin chemotherapy; the prevention of nephrotoxicity after HIPEC by sodium thiosulfate in particular has been confirmed by several studies .
The effectiveness of this drug has been progressively demonstrated as studies accumulate, but a standardized treatment regimen for cisplatin nephrotoxicity is still needed to reduce the postoperative AKI it causes. In addition, the KDIGO guidelines specifically emphasize the importance of establishing a multidisciplinary team, including nephrologists, intensivists, surgeons, and anesthesiologists, to manage patients with AKI. Finally, in this study we paid particular attention to the effect of the dose of chemotherapeutic agents used in HIPEC on postoperative acute kidney injury (AKI). Although we compiled detailed information on the doses used in the different studies, a direct meta-analysis was challenging because of the heterogeneity of the data. Our analysis revealed a possible association between the type and dose of chemotherapeutic agent and the risk of AKI: cisplatin in particular was significantly associated with an increased risk of postoperative AKI, whereas mitomycin C showed a protective effect against postoperative AKI. These findings underscore the importance of optimizing chemotherapeutic drug dosing in HIPEC and the need for further studies to determine the impact of different chemotherapy regimens on AKI risk. Future studies require more standardized data collection and more refined dose–response analyses to better understand the role of chemotherapeutic agents in HIPEC and to guide clinical practice.
This systematic review and meta-analysis provides the first comprehensive search and scientific analysis of the risk factors for AKI after CRS + HIPEC, together with a comprehensive analysis of the incidence of postoperative AKI in the existing literature. It fills the gap left by incomplete data on AKI risk factors and innovatively evaluates the impact of different primary tumor sites on the incidence of postoperative AKI. Although our study provides valuable information about risk factors for AKI after CRS + HIPEC, it has some limitations. All included studies were retrospective cohort studies, which may be subject to selection bias and information bias. In addition, heterogeneity among studies may have affected the interpretation of the results; although we used random-effects models and sensitivity analyses to address heterogeneity, its influence on the overall conclusions cannot be excluded. Our study may also have been affected by publication bias, although Begg's and Egger's tests did not detect significant publication bias. Among the included studies, there were differences in the specific modalities of CRS and HIPEC, the selection and dosage of chemotherapeutic agents, and the diagnostic criteria for AKI; this diversity may have influenced the assessment and comparison of AKI risk factors. In addition, data completeness and reporting bias are potential limitations, as only published studies were included and some unpublished results may be missing. Finally, our results may be limited by geographic and population distribution: most studies came from specific regions and may not be fully representative of the global picture.
Because of the limited number of included studies, we lacked detailed information on some potential risk factors and therefore could not analyze them. For example, Annika et al. found that coronary artery disease increased the risk of AKI after CRS + HIPEC. In addition, only one study reported the effect of preoperative antibiotic use on AKI, even though prophylactic antibiotic use is very common in cytoreductive laparotomy. The study by Lu et al. mentioned that ascites was also a risk factor for AKI, but the volume of ascites was not quantified. For malignant tumors in the abdominal cavity, ascites is an unavoidable challenge, and understanding the effect of different gradients of ascites volume on postoperative AKI will be one of our future research directions. Finally, it is also particularly important to explore the effect of the dose of chemotherapeutic agents administered during HIPEC on postoperative AKI.
The results of this systematic review and meta-analysis showed that the preoperative risk factors for postoperative AKI after CRS + HIPEC include age, gender, BMI, PCI, eGFR, Hb, diabetes mellitus, and hypertension. All of these factors except age, gender, and PCI are potentially controllable or reversible, and early identification and intervention in the perioperative period, for example through preoperative monitoring and management of renal function, Hb, blood glucose, and blood pressure, should greatly reduce the incidence of postoperative AKI. In addition, the duration of intraoperative SBP < 100 mmHg (min) is an intraoperative risk factor for postoperative AKI; close attention to intraoperative anesthesia management and maintenance of blood pressure within the normal range will reduce the incidence of AKI. Finally, with respect to the choice of intraoperative chemotherapeutic agent, MMC can significantly reduce the risk of postoperative AKI compared with cisplatin. These results may provide clinicians with more epidemiological evidence for the prevention of postoperative AKI after CRS + HIPEC, supporting personalized treatment plans and optimized perioperative management with a view to improving patient prognosis.
Supplementary Material 1. Supplementary Material 2. Supplementary Material 3. Supplementary Material 4.
Harmonizing Quality Improvement Metrics Across Global Trial Networks to Advance Paediatric Clinical Trials Delivery. Significant challenges remain in how paediatric clinical trials are conducted: up to 19% of trials have been reported to discontinue, with up to 38% of trials reporting patient recruitment as the main reason . These results are attributed to issues with the design and operational execution of these trials, including lengthy study start-up times, inability to meet target enrolment goals and poor patient retention rates . The clinical research enterprise needs to transform, involving collaboration among diverse public and private stakeholders, innovative re-engineering of the current delivery of clinical trials, and novel methodologies to integrate existing expertise, resources, and infrastructure . These challenges apply to paediatric trials . Clinical trial networks can support optimizing trial delivery by implementing quality and performance metrics in alignment with sponsors and sites . Adopting rigorous, harmonized systems and procedures to capture operational metrics and compare them with performance targets can support tracking, evaluating, benchmarking and predicting performance. Metrics should measure the right factors accurately, with standard definitions and data points, to provide actionable information to support planning and decisions . Metrics can be used to track outcomes, processes, and performance in clinical trial delivery by sponsors and research networks and can focus on the individual site and protocol levels but also at a portfolio level, across trials and sites . The objectives of this paper are to: (1) describe approaches used by contributing networks to identify and develop key metrics/indicators; (2) describe common metrics and challenges in identifying network metrics; and (3) identify a preliminary set of interoperable metrics. An international quality initiative "think tank" was convened with representatives from three paediatric trial networks from different jurisdictions that focus on novel drugs. These networks are specialty-agnostic with wide geographical coverage and work with the Pharmaceutical industry and academic Sponsors. This group, derived from the ongoing dialogue and collaboration between these networks, focused on improving the pediatric research enterprise and infrastructure. The group met remotely from 2021 to 2023, with at least quarterly meetings. Open discussions were driven by sharing of approaches, processes, documents, and experiences from each network. Metrics and their methods of collection were identified through discussion and sharing within the think tank, supported by a survey of the networks. Sources of alignment and divergence and opportunities for shared metrics were identified by consensus between members of the think tank. This work used process data from the networks, excluding personal data; accordingly, review by research ethics committees or Institutional Review Boards was not needed. Contributing Networks Institute for Advanced Clinical Trials (I-ACT) for Children I-ACT was created by a consortium of key stakeholders in paediatrics, including the Critical Path Institute, the American Academy of Pediatrics and others in academia, industry, and the regulatory world. I-ACT is a 501c3 non-profit organization with a mission to serve as a neutral and independent organization on behalf of children everywhere.
I-ACT is designed to advance innovative medicines and device development and labelling to improve child health . I-ACT engages public and private stakeholders through research and education to ensure that healthcare for children is continually improved by enhancing the awareness, quality, and support for paediatric clinical trials. I-ACT currently includes 74 U.S. and international network sites committed to performing paediatric research to support regulatory approval by industry and academic Sponsors. I-ACT supports the network sites by providing clinical trial opportunities, a peer-to-peer mentoring program, educational webinars, professional education grants, and supports sites to improve paediatric research conduct. I-ACT launched the Pediatric Improvement Collaborative for Clinical Trials & Research (PICTR ® ), a quality improvement program to help identify and mitigate the challenges sites face when conducting paediatric clinical trials. The PICTR program collects and analyses paediatric clinical trial operations data at site level to determine best practices and process improvement. The data is shared across the site network. This exchange creates a continuous learning environment to maximize trial speed, quality and efficiency. conect4children (c4c)-Collaborative Network for European Clinical Trials for Children C4c is an action under the Innovative Medicines Initiative 2 (IMI2) Joint Understanding, Grant Agreement 777389 from 2018 to 2024 . The c4c consortium includes 10 large pharmaceutical companies and 37 non-industry partners, including academia, hospitals, third-sector organizations and patient advocacy groups. The consortium aims to set up and evaluate a pan-European paediatric-focused clinical trial infrastructure tailored to meet the needs of children involved in clinical trials. c4c is focused on four main areas of services, including: strategic feasibility expert advice on study design and/or paediatric development programmes, including patient/parent involvement; a network of over 250 clinical sites across 21 European countries coordinated by 20 National Hubs and a central Network Infrastructure Office, with local knowledge and expertise and aligned processes across the entire network; a Training Academy providing standardized training to all study sites and site personnel; and a Data focused work package to support management of data and metrics used by the network and the development of a standardised paediatric data dictionary. A new legal entity, the conect4children Stichting has been established to ensure sustainability of this project’s results. Maternal Infant Child and Youth Research Network (MICYRN) MICYRN is a Canadian federal not-for-profit, charitable organization founded in 2006 to build capacity for high-quality applied health research. MICYRN is governed by a Board comprising member research organizations and members at large, who represent specific research foci and expertise. Oversight of the network is maintained through an executive team consisting of the Board chair, vice-chair, scientific directors, and executive directors. The network formally links 21 maternal and child health research member organizations based at academic health centres in Canada; is affiliated with more than 25 practice-based research networks; provides support to new and emerging teams; and has established strong national and international partnerships such as I-ACT and c4c. 
The mission of MICYRN is to catalyze advances in maternal and child healthcare by connecting minds and removing barriers to high-quality health research. MICYRN is working towards building a national infrastructure to attract and facilitate the conduct of maternal-child investigator-initiated and industry-sponsored multicenter clinical trials and functions as a de-centralized Academic Research Organization. MICYRN prioritizes quality improvement initiatives, supports training and mentorship programs for emerging investigators and new trainees, and leverages national partnerships to lead advocacy initiatives for regulatory and ethical pathways in Canada. In collaboration with I-ACT, MICYRN is working on a Quality Improvement and Performance Metrics Initiative to collect information on key indicators to improve maternal/child health in Canada. Institute for Advanced Clinical Trials (I-ACT) for Children I-ACT was created by a consortium of key stakeholders in paediatrics, including the Critical Path Institute, the American Academy of Pediatrics and others in academia, industry, and the regulatory world. I-ACT is a 501c3 non-profit organization with a mission to serve as a neutral and independent organization on behalf of children everywhere. I-ACT is designed to advance innovative medicines and device development and labelling to improve child health . I-ACT engages public and private stakeholders through research and education to ensure that healthcare for children is continually improved by enhancing the awareness, quality, and support for paediatric clinical trials. I-ACT currently includes 74 U.S. and international network sites committed to performing paediatric research to support regulatory approval by industry and academic Sponsors. I-ACT supports the network sites by providing clinical trial opportunities, a peer-to-peer mentoring program, educational webinars, professional education grants, and supports sites to improve paediatric research conduct. I-ACT launched the Pediatric Improvement Collaborative for Clinical Trials & Research (PICTR ® ), a quality improvement program to help identify and mitigate the challenges sites face when conducting paediatric clinical trials. The PICTR program collects and analyses paediatric clinical trial operations data at site level to determine best practices and process improvement. The data is shared across the site network. This exchange creates a continuous learning environment to maximize trial speed, quality and efficiency. conect4children (c4c)-Collaborative Network for European Clinical Trials for Children C4c is an action under the Innovative Medicines Initiative 2 (IMI2) Joint Understanding, Grant Agreement 777389 from 2018 to 2024 . The c4c consortium includes 10 large pharmaceutical companies and 37 non-industry partners, including academia, hospitals, third-sector organizations and patient advocacy groups. The consortium aims to set up and evaluate a pan-European paediatric-focused clinical trial infrastructure tailored to meet the needs of children involved in clinical trials. 
c4c is focused on four main areas of services, including: strategic feasibility expert advice on study design and/or paediatric development programmes, including patient/parent involvement; a network of over 250 clinical sites across 21 European countries coordinated by 20 National Hubs and a central Network Infrastructure Office, with local knowledge and expertise and aligned processes across the entire network; a Training Academy providing standardized training to all study sites and site personnel; and a Data focused work package to support management of data and metrics used by the network and the development of a standardised paediatric data dictionary. A new legal entity, the conect4children Stichting has been established to ensure sustainability of this project’s results. Maternal Infant Child and Youth Research Network (MICYRN) MICYRN is a Canadian federal not-for-profit, charitable organization founded in 2006 to build capacity for high-quality applied health research. MICYRN is governed by a Board comprising member research organizations and members at large, who represent specific research foci and expertise. Oversight of the network is maintained through an executive team consisting of the Board chair, vice-chair, scientific directors, and executive directors. The network formally links 21 maternal and child health research member organizations based at academic health centres in Canada; is affiliated with more than 25 practice-based research networks; provides support to new and emerging teams; and has established strong national and international partnerships such as I-ACT and c4c. The mission of MICYRN is to catalyze advances in maternal and child healthcare by connecting minds and removing barriers to high-quality health research. MICYRN is working towards building a national infrastructure to attract and facilitate the conduct of maternal-child investigator-initiated and industry-sponsored multicenter clinical trials and functions as a de-centralized Academic Research Organization. MICYRN prioritizes quality improvement initiatives, supports training and mentorship programs for emerging investigators and new trainees, and leverages national partnerships to lead advocacy initiatives for regulatory and ethical pathways in Canada. In collaboration with I-ACT, MICYRN is working on a Quality Improvement and Performance Metrics Initiative to collect information on key indicators to improve maternal/child health in Canada. I-ACT was created by a consortium of key stakeholders in paediatrics, including the Critical Path Institute, the American Academy of Pediatrics and others in academia, industry, and the regulatory world. I-ACT is a 501c3 non-profit organization with a mission to serve as a neutral and independent organization on behalf of children everywhere. I-ACT is designed to advance innovative medicines and device development and labelling to improve child health . I-ACT engages public and private stakeholders through research and education to ensure that healthcare for children is continually improved by enhancing the awareness, quality, and support for paediatric clinical trials. I-ACT currently includes 74 U.S. and international network sites committed to performing paediatric research to support regulatory approval by industry and academic Sponsors. I-ACT supports the network sites by providing clinical trial opportunities, a peer-to-peer mentoring program, educational webinars, professional education grants, and supports sites to improve paediatric research conduct. 
I-ACT launched the Pediatric Improvement Collaborative for Clinical Trials & Research (PICTR ® ), a quality improvement program to help identify and mitigate the challenges sites face when conducting paediatric clinical trials. The PICTR program collects and analyses paediatric clinical trial operations data at site level to determine best practices and process improvement. The data is shared across the site network. This exchange creates a continuous learning environment to maximize trial speed, quality and efficiency. C4c is an action under the Innovative Medicines Initiative 2 (IMI2) Joint Understanding, Grant Agreement 777389 from 2018 to 2024 . The c4c consortium includes 10 large pharmaceutical companies and 37 non-industry partners, including academia, hospitals, third-sector organizations and patient advocacy groups. The consortium aims to set up and evaluate a pan-European paediatric-focused clinical trial infrastructure tailored to meet the needs of children involved in clinical trials. c4c is focused on four main areas of services, including: strategic feasibility expert advice on study design and/or paediatric development programmes, including patient/parent involvement; a network of over 250 clinical sites across 21 European countries coordinated by 20 National Hubs and a central Network Infrastructure Office, with local knowledge and expertise and aligned processes across the entire network; a Training Academy providing standardized training to all study sites and site personnel; and a Data focused work package to support management of data and metrics used by the network and the development of a standardised paediatric data dictionary. A new legal entity, the conect4children Stichting has been established to ensure sustainability of this project’s results. MICYRN is a Canadian federal not-for-profit, charitable organization founded in 2006 to build capacity for high-quality applied health research. MICYRN is governed by a Board comprising member research organizations and members at large, who represent specific research foci and expertise. Oversight of the network is maintained through an executive team consisting of the Board chair, vice-chair, scientific directors, and executive directors. The network formally links 21 maternal and child health research member organizations based at academic health centres in Canada; is affiliated with more than 25 practice-based research networks; provides support to new and emerging teams; and has established strong national and international partnerships such as I-ACT and c4c. The mission of MICYRN is to catalyze advances in maternal and child healthcare by connecting minds and removing barriers to high-quality health research. MICYRN is working towards building a national infrastructure to attract and facilitate the conduct of maternal-child investigator-initiated and industry-sponsored multicenter clinical trials and functions as a de-centralized Academic Research Organization. MICYRN prioritizes quality improvement initiatives, supports training and mentorship programs for emerging investigators and new trainees, and leverages national partnerships to lead advocacy initiatives for regulatory and ethical pathways in Canada. In collaboration with I-ACT, MICYRN is working on a Quality Improvement and Performance Metrics Initiative to collect information on key indicators to improve maternal/child health in Canada. 
Approaches Used by Contributing Networks to Identify and Develop Key Metrics/Indicators Pediatric Improvement Collaborative for Clinical Research and Trials (PICTR®) In 2018, PICTR worked closely with members of the site network to assess current paediatric clinical trial research operations. Sites completed surveys about their operations and met frequently to share gaps in their processes. Based on site feedback and subject matter experts (SME), a preliminary list of measurable goals and metrics was developed for improving the clinical trials process within sites. To ensure the program’s goals and metrics aligned across the industry, PICTR hosted an SME meeting in Chicago in 2019, bringing together key stakeholders to discuss the conduct of clinical trials including pharmaceutical companies, federal agencies, academia, research sites, other global paediatric networks, and patients and families. The meeting outcome was a draft set of six metrics used to identify gaps in the clinical research operations process at site level. Following the SME meeting, 14 sites participated in a pilot project collecting research operations metrics focused on the institutional review board and contracts process. The pilot aided in validating the program goals and identifying additional metrics after which, there was an ongoing collaboration with key stakeholders resulting in a final set of 11 core research operation metrics (Appendix A). Quality Improvement initiatives for sites were based on these metrics. connect4children-Collaborative Network for European Clinical Trial for Children C4c collects metrics to measure quality and performance of processes and network. Implementing a Performance Measurement System has a positive organisational effect, improves results over the long term, drives organisational strategy, supports planning and decision-making, and acts as an effective tool for communicating achieved results to stakeholders Within c4c, a methodological model was developed to identify a list of metrics and underlying data points to be suggested for adoption by c4c. The model considered metrics-specific issues, including: Terminology. Common practice and use of metrics—collected from examples of national networks and sponsors. Lean Management approach in clinical research (e.g. “time” as one of the key performance measures). Goal-Question-Metric Paradigm (defining goals behind the processes to be measured and using these to decide precisely what to measure). Multi-Criteria Decision Analysis (to aggregate several simple metrics into one meaningful combined metric). Target setting. A cross-work stream collaboration between c4c partners led to the selection of an initial core set of 13 metrics (Appendix B) from a list of 126 proposed metrics. The core set, prioritised by function and business case, is used to measure the performance of the studies used to define the network’s processes and to test its viability (so-called proof-of-viability studies), thereby testing the usefulness and actionability of this core set. Each metric has a target (value or range) and several attributes defined, including Name and Code, Process (mapped to Network or Clinical Trial processes), Definition, Data Points, as well as prioritisation for collection. The subset was chosen after a three-month consultation process across all c4c National Hubs and Industry partners of the consortium. The c4c Network Committee approved the metrics after a pilot phase of utilising with academic proof-of-viability studies. 
These metrics are critical to the c4c network and trial performance management framework and are continuously reviewed and evaluated. MICYRN—Maternal Infant Child and Youth Research Network In early 2019, MICYRN collaborated with I-ACT to learn about the PICTR initiative and metrics collected in the United States. Following the discussions with I-ACT, MICYRN engaged with its clinical trials consortium (CTC) comprised of scientific and operational representatives across 16 clinical trial units at MICYRN’s member research organizations to discuss the Quality Improvement (QI) and Performance Metrics initiative. Buy-in from the CTC was achieved and deemed important to the maternal-child health research community in Canada. The MICYRN leadership team conducted individual teleconferences with CTC sites to identify a list of meaningful indicators across the 3 domains of quality, efficiency, and timeliness; 11 interviews were completed. Using the interview data, an electronic survey was created with the compiled list of 14 indicators and disseminated to the 16 consortium sites for completion. Sites were asked to rank each indicator in order of importance to their site (1–14). 11/16 CTC sites completed the survey. The survey results were analysed, reducing the list to the top 6 indicators identified by the CTC sites. The 6 indicators were reviewed by the MICYRN leadership team and in terms of tangible action items that MICYRN could support and facilitate. The MICYRN Annual General Assembly brought together the CTC to collectively generate common data elements and definitions, inclusion/exclusion criteria, timeframe, methods of data collection, frequency of reporting, and unit analysis, further reducing the indicators to 5 (Appendix C). The CTC and MICYRN leadership team are currently working on metrics collection and action items for each of the 5 defined indicators. In summary, metric selection was driven by site quality improvement in iACT (11 metrics), by network performance in c4c (13 metrics), and by both in MICYRN (5 metrics). Commonalities and Challenges in Identifying Network Metrics Appendix A–D describe the metrics provided by the participating networks. The metrics developed are broadly at either trial level, or at site, and/or country level. They are related to individual services developed, and/or network/infrastructure Figure summarizes the commonalities of the approach to identifying and developing these metrics across the three networks. All networks used a staged evidence-based approach based on existing evidence and wide internal stakeholder consultation and co-creation, keeping in mind the expected implementation of metrics across sites and organizations. Appendix D summarises metrics related to each phase across each contributing network. The network driven by site quality improvement did not have indicators for capacity/capability or identification/feasibility (Table ). 15 metrics for trial start up and conduct were identified. Metrics related to approvals were found in all three networks. Topics relating to protocol review were only included by the network driven by site quality improvement. Topics relating to numbers of paediatric interventional clinical trials and investigators participating in these at country level were only included by the network focussing on country-wide approach. Site identification/feasibility indicators were only included by the network that was driven only by network management. 
The challenges faced when reviewing and identifying common metrics reported by the three networks were: Technical differences: c4c, I-ACT and MICYRN use (and source data from organisations that may use) different technical standards and systems, making it difficult to exchange data and information. Measurement and semantic differences: All three networks use different terminology, definitions for each data point and metrics, and coding systems, making it difficult to compare data across organizations. Each of the three networks used slightly different reference points and definitions to capture similar metrics. For example, specific definitions used for site “initiation”, “activation” and “ready for enrolment” timelines were different between networks, impacting how the dates for these steps were captured. The same was noted in recruitment dates related to patient screening, consent, or enrolment. The source of information also varies; c4c collects detailed information from sponsors, whereas I-ACT and MICYRN collect the information from sites. Organizational policies: Parent and partner organisations have different policies and regulations regarding data sharing and use; these need to be addressed to establish common guidelines for data exchange. These differences often arise because of the characteristics of health systems. Our findings and discussion suggest that metrics are identified, defined, and developed according to each stakeholder's goals and the processes they can measure or influence. Working Towards a Common Interoperable Set of Metrics By comparing the identified metrics across the networks, we found specific shared metrics measured across all three networks that can form the basis of comparators for the service/support that the networks provide across the trial lifecycle. Shared metrics could measure the effectiveness of interoperable networks. An example of a shared metric is shown in Table , illustrating challenges with terminologies and data point/measures alignment. Pediatric Improvement Collaborative for Clinical Research and Trials (PICTR®) In 2018, PICTR worked closely with members of the site network to assess current paediatric clinical trial research operations. Sites completed surveys about their operations and met frequently to share gaps in their processes. Based on site feedback and subject matter experts (SME), a preliminary list of measurable goals and metrics was developed for improving the clinical trials process within sites. To ensure the program’s goals and metrics aligned across the industry, PICTR hosted an SME meeting in Chicago in 2019, bringing together key stakeholders to discuss the conduct of clinical trials including pharmaceutical companies, federal agencies, academia, research sites, other global paediatric networks, and patients and families. The meeting outcome was a draft set of six metrics used to identify gaps in the clinical research operations process at site level. Following the SME meeting, 14 sites participated in a pilot project collecting research operations metrics focused on the institutional review board and contracts process. The pilot aided in validating the program goals and identifying additional metrics after which, there was an ongoing collaboration with key stakeholders resulting in a final set of 11 core research operation metrics (Appendix A). Quality Improvement initiatives for sites were based on these metrics. 
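To make the semantic-alignment problem above concrete, the sketch below shows one hypothetical way of mapping network-specific milestone labels (such as "initiation", "activation" or "ready for enrolment") onto a harmonized event name before computing a shared timeline metric. The label names and dates are invented for illustration and are not taken from the c4c, I-ACT or MICYRN systems.

```python
# Hypothetical harmonization of divergent milestone labels before computing a
# shared "site ready to enrol" timeline metric. All names and dates are illustrative.
from datetime import date

# Each network may label the same event differently (assumed labels, not real ones).
LABEL_MAP = {
    "site_activation": "site_ready_to_enrol",
    "site_initiation_visit": "site_ready_to_enrol",
    "ready_for_enrolment": "site_ready_to_enrol",
    "first_patient_consented": "first_participant_enrolled",
    "first_subject_in": "first_participant_enrolled",
}

def harmonize(events):
    """Translate network-specific event labels into harmonized ones."""
    return {LABEL_MAP.get(label, label): day for label, day in events.items()}

# Example site record from a fictitious network export.
raw = {"site_initiation_visit": date(2023, 3, 1), "first_subject_in": date(2023, 4, 15)}
h = harmonize(raw)
days_to_first_enrolment = (h["first_participant_enrolled"] - h["site_ready_to_enrol"]).days
print(f"Days from site ready to first participant enrolled: {days_to_first_enrolment}")
```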
connect4children-Collaborative Network for European Clinical Trial for Children C4c collects metrics to measure quality and performance of processes and network. Implementing a Performance Measurement System has a positive organisational effect, improves results over the long term, drives organisational strategy, supports planning and decision-making, and acts as an effective tool for communicating achieved results to stakeholders Within c4c, a methodological model was developed to identify a list of metrics and underlying data points to be suggested for adoption by c4c. The model considered metrics-specific issues, including: Terminology. Common practice and use of metrics—collected from examples of national networks and sponsors. Lean Management approach in clinical research (e.g. “time” as one of the key performance measures). Goal-Question-Metric Paradigm (defining goals behind the processes to be measured and using these to decide precisely what to measure). Multi-Criteria Decision Analysis (to aggregate several simple metrics into one meaningful combined metric). Target setting. A cross-work stream collaboration between c4c partners led to the selection of an initial core set of 13 metrics (Appendix B) from a list of 126 proposed metrics. The core set, prioritised by function and business case, is used to measure the performance of the studies used to define the network’s processes and to test its viability (so-called proof-of-viability studies), thereby testing the usefulness and actionability of this core set. Each metric has a target (value or range) and several attributes defined, including Name and Code, Process (mapped to Network or Clinical Trial processes), Definition, Data Points, as well as prioritisation for collection. The subset was chosen after a three-month consultation process across all c4c National Hubs and Industry partners of the consortium. The c4c Network Committee approved the metrics after a pilot phase of utilising with academic proof-of-viability studies. These metrics are critical to the c4c network and trial performance management framework and are continuously reviewed and evaluated. MICYRN—Maternal Infant Child and Youth Research Network In early 2019, MICYRN collaborated with I-ACT to learn about the PICTR initiative and metrics collected in the United States. Following the discussions with I-ACT, MICYRN engaged with its clinical trials consortium (CTC) comprised of scientific and operational representatives across 16 clinical trial units at MICYRN’s member research organizations to discuss the Quality Improvement (QI) and Performance Metrics initiative. Buy-in from the CTC was achieved and deemed important to the maternal-child health research community in Canada. The MICYRN leadership team conducted individual teleconferences with CTC sites to identify a list of meaningful indicators across the 3 domains of quality, efficiency, and timeliness; 11 interviews were completed. Using the interview data, an electronic survey was created with the compiled list of 14 indicators and disseminated to the 16 consortium sites for completion. Sites were asked to rank each indicator in order of importance to their site (1–14). 11/16 CTC sites completed the survey. The survey results were analysed, reducing the list to the top 6 indicators identified by the CTC sites. The 6 indicators were reviewed by the MICYRN leadership team and in terms of tangible action items that MICYRN could support and facilitate. 
The MICYRN Annual General Assembly brought together the CTC to collectively generate common data elements and definitions, inclusion/exclusion criteria, timeframe, methods of data collection, frequency of reporting, and unit analysis, further reducing the indicators to 5 (Appendix C). The CTC and MICYRN leadership team are currently working on metrics collection and action items for each of the 5 defined indicators. In summary, metric selection was driven by site quality improvement in iACT (11 metrics), by network performance in c4c (13 metrics), and by both in MICYRN (5 metrics). In 2018, PICTR worked closely with members of the site network to assess current paediatric clinical trial research operations. Sites completed surveys about their operations and met frequently to share gaps in their processes. Based on site feedback and subject matter experts (SME), a preliminary list of measurable goals and metrics was developed for improving the clinical trials process within sites. To ensure the program’s goals and metrics aligned across the industry, PICTR hosted an SME meeting in Chicago in 2019, bringing together key stakeholders to discuss the conduct of clinical trials including pharmaceutical companies, federal agencies, academia, research sites, other global paediatric networks, and patients and families. The meeting outcome was a draft set of six metrics used to identify gaps in the clinical research operations process at site level. Following the SME meeting, 14 sites participated in a pilot project collecting research operations metrics focused on the institutional review board and contracts process. The pilot aided in validating the program goals and identifying additional metrics after which, there was an ongoing collaboration with key stakeholders resulting in a final set of 11 core research operation metrics (Appendix A). Quality Improvement initiatives for sites were based on these metrics. C4c collects metrics to measure quality and performance of processes and network. Implementing a Performance Measurement System has a positive organisational effect, improves results over the long term, drives organisational strategy, supports planning and decision-making, and acts as an effective tool for communicating achieved results to stakeholders Within c4c, a methodological model was developed to identify a list of metrics and underlying data points to be suggested for adoption by c4c. The model considered metrics-specific issues, including: Terminology. Common practice and use of metrics—collected from examples of national networks and sponsors. Lean Management approach in clinical research (e.g. “time” as one of the key performance measures). Goal-Question-Metric Paradigm (defining goals behind the processes to be measured and using these to decide precisely what to measure). Multi-Criteria Decision Analysis (to aggregate several simple metrics into one meaningful combined metric). Target setting. A cross-work stream collaboration between c4c partners led to the selection of an initial core set of 13 metrics (Appendix B) from a list of 126 proposed metrics. The core set, prioritised by function and business case, is used to measure the performance of the studies used to define the network’s processes and to test its viability (so-called proof-of-viability studies), thereby testing the usefulness and actionability of this core set. 
In summary, metric selection was driven by site quality improvement in iACT (11 metrics), by network performance in c4c (13 metrics), and by both in MICYRN (5 metrics). Appendix A–D describe the metrics provided by the participating networks. The metrics developed are broadly at either trial level, or at site, and/or country level. They are related to individual services developed, and/or network/infrastructure Figure summarizes the commonalities of the approach to identifying and developing these metrics across the three networks. All networks used a staged evidence-based approach based on existing evidence and wide internal stakeholder consultation and co-creation, keeping in mind the expected implementation of metrics across sites and organizations. Appendix D summarises metrics related to each phase across each contributing network. The network driven by site quality improvement did not have indicators for capacity/capability or identification/feasibility (Table ). 15 metrics for trial start up and conduct were identified. Metrics related to approvals were found in all three networks. Topics relating to protocol review were only included by the network driven by site quality improvement.
Topics relating to numbers of paediatric interventional clinical trials and investigators participating in these at country level were only included by the network focussing on country-wide approach. Site identification/feasibility indicators were only included by the network that was driven only by network management. The challenges faced when reviewing and identifying common metrics reported by the three networks were: Technical differences: c4c, I-ACT and MICYRN use (and source data from organisations that may use) different technical standards and systems, making it difficult to exchange data and information. Measurement and semantic differences: All three networks use different terminology, definitions for each data point and metrics, and coding systems, making it difficult to compare data across organizations. Each of the three networks used slightly different reference points and definitions to capture similar metrics. For example, specific definitions used for site “initiation”, “activation” and “ready for enrolment” timelines were different between networks, impacting how the dates for these steps were captured. The same was noted in recruitment dates related to patient screening, consent, or enrolment. The source of information also varies; c4c collects detailed information from sponsors, whereas I-ACT and MICYRN collect the information from sites. Organizational policies: Parent and partner organisations have different policies and regulations regarding data sharing and use; these need to be addressed to establish common guidelines for data exchange. These differences often arise because of the characteristics of health systems. Our findings and discussion suggest that metrics are identified, defined, and developed according to each stakeholder's goals and the processes they can measure or influence. By comparing the identified metrics across the networks, we found specific shared metrics measured across all three networks that can form the basis of comparators for the service/support that the networks provide across the trial lifecycle. Shared metrics could measure the effectiveness of interoperable networks. An example of a shared metric is shown in Table , illustrating challenges with terminologies and data point/measures alignment. The adoption of rigorous, harmonized operational metrics along with performance targets can support tracking, evaluating, benchmarking, and predicting performance . To our knowledge, this is the first report of international comparisons between international paediatric clinical research networks. c4c, I-ACT and MICYRN each have developed and implemented a well-defined set of metrics. Despite differences, common ground exists in the approaches, methods and sources of data collection for these metrics. The review and usage of these metrics are defined by each network’s internal goals. The main aim aligned across all three networks is to ensure the efficient conduct of clinical trials across the network sites. Adopting common metrics, standards and terminologies across organizations helps ensure data interoperability, identifying common trends, and allows the networks to work towards benchmarking. The networks have worked together to identify a core set of metrics which is comparable to other multi-site, multi-national clinical research organizations working towards efficiency in trial set-up, enrolment, and completion . Developing common metrics for multiple networks from the start of the networks would be ideal. 
Each network needs to establish its focus and identity before liaising with other networks. Several challenges exist concerning I-ACT, MICYRN and c4c using metrics inter-operably. Some of these challenges can be addressed by clearly identifying specific collaborative activities that address organisational, measurement and technical issues in a comprehensive and coherent manner. One challenge relates to the specific nature of what is being measured and how, which ultimately impacts the measurement properties, utility, and impact of selected metrics, pertinent to how the metrics and underlying data points were defined. The absence of a widely accepted standard for data nomenclature, exchange and interoperability means that theses aspects will need to be addressed within each network and then across the networks. For future inter-operability, all 3 networks will need to agree upon common sets of metrics and accompanying definitions. Another general limitation of metrics-driven network initiatives is the oversight and influence that each network has over their respective sites. Each network has been designed and established with its partner organisations with differences in communication channels, sponsor interactions activities, and structures, all of which impact the information collection and flow through the organisations. The networks cannot mandate sharing of data because different partner organizations have different cultures, governance, and ways of working, which can impact their willingness and ability to collaborate on interoperability efforts. In particular, there is a limitation in capturing and interpreting variations in some metrics that are beyond the control of the network. An example of this is seen in recruitment timelines. The recruitment metrics were mostly aggregated and do not consider potential reasons for efficiency or delays. This limitation makes it difficult to identify specific actions for standardization and improvement in the future. Furthermore, c4c and I-ACT both have specific roles and objectives for their organizational sites or country-level networks and they receive a mixture of private and public funds to support the initiative. On the other hand, the sites affiliated with MICYRN are academic member organizations that operate without dedicated funds to support their metrics collection initiative. As a result, MICYRN does not have the same level of influence and incentive for their sites, and the collection of certain metrics largely depends on the individual sites rather than the network. This poses additional challenges in gathering accurate and comprehensive metrics. Despite these challenges, the overarching goal of these networks is to improve the conduct of clinical research studies sponsored by industry and academia. Interoperable metrics will ensure that clinical research study operations across different networks can be reported on in a standardised manner, which makes it easier to compare data across different networks and countries. A more comprehensive, consistent and accurate view across sites and countries is possible, allowing better identification of issues and informed decisions to be made based on high-quality data. The above advantages will support tracking and performance management of network activities supporting clinical trials, collaborative decision making, and solution finding. 
Sharing of good practice and learnings across networks will result in efficiencies of trial set up and enrolment stages, thereby reducing costs and ultimately helping improve patient outcomes. Shared metrics and targets establish a common framework that will allow better identification of bottlenecks and hurdles in the trial delivery process, and development of quality improvement initiatives to support site and organizations, including adequate resourcing and process improvement. Interoperable metrics enable clinical research networks to collaborate and share data, which can lead to increased efficiency and the development of new treatments and therapies. Other existing clinical research networks around the world that include paediatric research activities collect clinical trial research performance metrics or benchmark data to assess the performance of their sites. However, the methodologies for collecting such data vary. Without a universal standard or methodology in place, networks cannot reference the same metrics across all domains of the trial lifecycle. For example, based on publically available data, the Paediatric Trials Network (PTN) addresses a reduction in time from the start of a study to completion and increased enrolment as part of its organizational improvements to increase efficiency and the Cystic Fibrosis Foundation (CF) Benchmarks use metrics focused on time to contract execution and time to first patient inclusion, whilst metrics from other networks such as the CTSA- Clinical Research Consortium and the Pediatric Emergency Care Applied Research Network (PECARN) do not address these areas of activity. Conversely, even when many networks focus on the same domains, methodological differences in the approach can be identified. That would be the case when contrasting metrics focused on time to IRB approval and recruitment from the three aforementioned research networks. These disparities are not dissimilar to the ones we identified, and can be at least partially justified by different contexts and purposes from each network, as well as constraints to data sources and data collection. Efforts are being made to establish more standardized approaches to data collection and measurement in clinical research . Regulatory bodies, professional organizations, and research institutions are working towards developing common frameworks and guidelines to facilitate the collection and reporting of data across different sites and networks. These initiatives aim to promote consistency and comparability in performance metrics, which can lead to better assessment and evaluation of clinical trial outcomes. It is important for clinical research networks and stakeholders to collaborate, share their experiences and knowledge to establish common methodologies and definitions to help advance the field and ensure the rigorous conduct of clinical trials, ultimately benefiting patients and advancing medical knowledge. It should be noted that this paper only focuses on a small sample of industry- and academia-facing large paediatric trial networks that are specialty agnostic with wide geographical coverage. Other networks focused on specific disciplines or covering other study designs may require a tailored approach to the selection of relevant operations metrics. In addition to these specific metric related limitations, interoperability efforts require significant funding and resources, as well as time and commitment to work together, which may not be available to all organisations. 
Recommendations and Next Steps The think-tank identified specific collaborative activities that are needed to develop and use interoperable metrics: Harmonization of processes for the collection of data related to metrics, including goals, data definitions, and measurement methods, across organizations can help ensure data comparability. Collaborative development of technology solutions that support interoperability, such as common data platforms and APIs. Provision of education and training to staff on the importance of data requirements, capture and integrity. Shared educational and training opportunities focusing on quality improvement may reduce burden on resources. Work with similar networks, e.g. those that may be academia-facing or not mandated to study new drugs with industry , on interoperable metrics and their implementation Addressing context at site and national level and regularly testing and evaluating the interoperability of data and systems across organizations to help identify and resolve any issues. Engaging stakeholders, including patients, healthcare providers, and regulatory agencies, in the interoperability efforts so that the solutions developed meet their needs and are widely adopted. Developing a multi-stakeholder engagement strategy for sites to be involved in metrics projects and across the three networks would further ensure interoperability. Establishing data sharing agreements to ensure the secure exchange of data and information. The think tank proposes the following next steps to utilise these metrics inter-operably: Define common or similar metrics/terminology that can be used within each network (intra network metrics) but be alike enough to allow interpretation as globally aligned networks. Define and align on data points that are collected to measure each metric. Review target values and/or comparators and/or benchmarks that may be used to drive performance.
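The recommendations above call for shared data definitions, aligned data points, and common data platforms. As a purely illustrative sketch of what such harmonization could look like in machine-readable form (this is not a specification used by c4c, I-ACT/PICTR, or MICYRN, and every field name and example value below is invented for illustration), a shared metric definition could bundle the attributes discussed earlier, such as name, code, mapped process, definition, underlying data points, target, and collection priority:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Dict, List, Optional


@dataclass
class MetricDefinition:
    """Hypothetical harmonized definition of one operational metric."""
    code: str                       # short identifier agreed across networks
    name: str                       # human-readable name
    process: str                    # trial-lifecycle process the metric maps to
    definition: str                 # agreed wording, incl. start/stop reference points
    data_points: List[str]          # raw data points each site/sponsor must supply
    unit: str                       # unit of the derived value
    target: Optional[float] = None  # target value or benchmark, if one is agreed
    collection_priority: str = "core"  # e.g. "core" vs "optional"


@dataclass
class MetricObservation:
    """One site's reported data points for a single study."""
    metric_code: str
    site_id: str
    values: Dict[str, date] = field(default_factory=dict)


# Example: a start-up timeliness metric defined once and derived identically everywhere.
time_to_ready = MetricDefinition(
    code="SU-01",
    name="Time from contract execution to site ready-to-enrol",
    process="Trial start-up",
    definition="Calendar days between contract execution date and ready-to-enrol date.",
    data_points=["contract_execution_date", "ready_to_enrol_date"],
    unit="days",
    target=60.0,
)


def derive_days(obs: MetricObservation, metric: MetricDefinition) -> int:
    """Derive the metric value from the two agreed date data points."""
    start = obs.values[metric.data_points[0]]
    stop = obs.values[metric.data_points[1]]
    return (stop - start).days


obs = MetricObservation(
    metric_code="SU-01",
    site_id="SITE-CA-001",
    values={
        "contract_execution_date": date(2023, 1, 10),
        "ready_to_enrol_date": date(2023, 3, 6),
    },
)

print(derive_days(obs, time_to_ready))  # 55 days, within the illustrative 60-day target
```

Because each network would derive the value from the same named data points and the same definition, the resulting numbers would be directly comparable across sites and networks, which is the precondition for the benchmarking described above.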
This paper presents a review on the experience of three international paediatric clinical research networks to establish metrics for paediatric clinical trial support, demonstrating a disparity in methodology and common challenges in defining metrics. The adoption of rigorous, validated, and harmonized operational metrics, along with performance targets, can bring several advantages to international paediatric research networks. The recommended next steps will contribute to enabling international collaboration and benchmarking, thereby resulting in more efficient trial set-up, enrolment, and completion, reducing costs and improving patient outcomes. |
Regional cerebral pulsatile hemodynamics during isocapnic and poikilocapnic hyperthermia in young men | b36d8f3d-1d43-4638-8184-5c6fd58e491d | 11847895 | Cardiovascular System[mh] | INTRODUCTION The brain is a metabolically expensive organ that demands a high blood flow relative to its size (Willie et al., ). However, maintaining nutrient delivery to match cerebral metabolism during environmentally challenging conditions may trigger deleterious hemodynamic patterns associated with adverse effects. For example, the thermoregulatory responses during heat stress entail profound cardiovascular and autonomic adjustments that redistribute cardiac output to the periphery (via cutaneous vasodilation) to support convective and evaporative heat loss (Crandall et al., ). Increased peripheral vasodilation caused by hyperthermia decreases total peripheral resistance and increases peripheral blood flow; consequently, the decreased total peripheral resistance and ensuing hypotension lower cerebral perfusion pressure [reviewed in (Bain et al., )]. However, core temperature typically needs to exceed ∼1.0°C during passive hyperthermia to elicit reductions in P a CO 2 between 5 and 15 mmHg (Barltrop, ) that result in lower CBF. Moreover, when core temperature ( T c ) increases beyond 1°C, temperature‐sensitive neurons in the medulla and carotid body cause an increase in ventilatory rate that exceeds metabolic needs, that is, hyperthermia‐induced hyperventilation (Curtis et al., ; Gibbons et al., ). The hyperthermic hyperventilation leads to a reduction in the partial pressure of arterial carbon dioxide (P a CO 2 ; hypocapnia) causing reductions in cerebral blood flow (CBF) (Brothers et al., ; Nelson et al., ; Wilson et al., ). Indeed, during passive heat stress, each degree Celsius increase in Tc is associated with hyperventilation and an approximate 3–5 mmHg decrease in P a CO 2 , corresponding to an approximate 10%–15% reduction in CBF [reviewed in (Bain et al., )]. Collectively, the hyperthermic‐induced hypocapnia and decreased perfusion pressure create a scenario where cerebrovascular resistances and pulsatility (PI) are altered. Typically, healthy vessels are compliant and capable of damping pulsatile hemodynamic transmittance from proximal to distal arteries despite increased cerebrovascular resistance (Muskat et al., ; van den Kerkhof et al., ; Vrselja et al., ; Zarrinkoob et al., ; Zieman et al., ). To date, only one study has attempted to investigate the relationship between pulsatile hemodynamic forces in central arteries and anterior cerebral blood velocities during heating (Ashton, ). However, this study (Zarrinkoob et al., ) did not measure extracranial or intracranial pulsatility to determine if hemodynamic buffering (i.e., damping; DFi) is preserved during hyperthermia and if hypocapnic vasoconstriction preserves or provokes cerebrovascular hemodynamics. Blood flow changes affect pulsatility and the brain's buffering capacity for hemodynamic stress. For instance, decreased gCBF increases pulsatile amplitudes and reduces DFi (Dempsey et al., ). Because hyperthermia is known to reduce CBF, the primary aim of this retrospective analysis was to quantify the impact of hyperthermia on cerebrovascular buffering and the impact of PaCO 2 . A secondary aim was to investigate the regional cerebral DFi (i.e., anterior circulation via ICA‐MCA and posterior circulation via VA‐PCA) because of the potential for regional differences in CBF responses to heat stress (Ashton, ). 
We hypothesized that hyperthermia would reduce both posterior and anterior cerebral DFi, but that DFi would remain unchanged during isocapnic hyperthermia, which would indicate that the changes in damping are attributable to hypocapnia.
METHODS 2.1 Participant information Ten healthy young males (age 23 ± 3 years) were recruited for the study. All participants were non-obese (body mass index 23.0 ± 2.0 kg m−1), normotensive (sitting blood pressure 118/70 ± 6/7 mmHg), normoglycemic (<7.0 mmol l−1), non-smoking, and free of overt cardiometabolic and respiratory diseases. Participants were rigorously screened to ensure optimal ICA and VA ultrasound images. Only those with ideal anatomy for sonography of the extracranial neck arteries were included (e.g., those with a high bifurcation of the common carotid artery were omitted). All experimentation was completed at the Centre for Heart, Lung & Vascular Health at the University of British Columbia, Kelowna, BC, Canada. The ethical committee of the University of British Columbia approved the study (H15-00166). This data set was part of a larger study that focused on assessing cerebral metabolic rate of oxygen, pro-oxidative and pro-inflammatory markers, and endothelial- and platelet-derived microparticles during passive hyperthermia. In contrast, the present retrospective analysis assessed the changes in pulsatile hemodynamic forces during poikilocapnic and isocapnic passive hyperthermia. A subset of descriptive data (blood pressure [BP], heart rate [HR], temperature) from this retrospective analysis has previously been published elsewhere (Bain et al., , ). The study conformed to the standards set by the Declaration of Helsinki, except for registration in a database. All participants provided informed written consent before experimentation. 2.2 Experimental protocol After initial familiarization with the measurements, the experimental protocol was completed for each subject on a single day (between 07:00 and 18:00 h), following a 4-h fast and a 12-h abstinence from alcohol or caffeine. Participants were fitted into a water-perfused tube-lined suit (Med-Eng, Ottawa, ON, Canada) that covered the entire body except for the head, feet, and hands. The suit was perfused with ∼49°C water until the esophageal temperature was +2°C above baseline, an absolute core temperature of 39.5°C, or the subject's volitional thermal tolerance was reached during this isolated hyperthermic exposure. Core temperature (T c ) was determined by a thermocouple probe (RET-1; Physitemp Instruments, Clifton, NJ, USA) inserted 40 cm past the nostril into the esophagus. Once peak hyperthermia was attained, the circulating temperature of the suit was reduced to ∼40°C to maintain a stable core temperature until the completion of all measures. At peak hyperthermia, P a CO 2 was increased to temperature-corrected baseline (BL) normothermic values using a custom-built end-tidal forcing apparatus (as explained in detail in 15). Cardiovascular and cerebrovascular measures were acquired for 1 min at BL, at +2°C during non-randomized poikilocapnic hyperthermia (HT), and at +2°C with isocapnic hyperthermia (HT-C). 2.3 Cardiovascular and cerebrovascular measures Heart rate (HR) was obtained from the R–R intervals measured from a three-lead ECG. Under local anesthesia (1% lidocaine) and ultrasound guidance, a 20-gauge arterial catheter (Arrow, Markham, ON, Canada) was placed in the right radial artery, and a central venous catheter (Edwards PediaSat Oximetry Catheter, Edwards Lifesciences, Irvine, CA, USA) was placed in the right internal jugular vein and advanced towards the jugular bulb to respectively measure mean arterial pressure (MAP) and intra-jugular venous pressure (IJVP).
Heart rate and blood pressure measures were integrated into PowerLab and LabChart software (ADInstruments, Colorado Springs, CO, USA) for online monitoring and saved for offline analysis. Blood flow in the right ICA and left VA was simultaneously measured using a vascular duplex ultrasound (VDu) (Terason 3200, Teratech, Burlington, MA, USA). The right ICA was, on average, insonated 2 cm from the carotid bifurcation, while the left VA was insonated at the C5–C6 or C4–C5 space, depending on each subject's unique anatomy. The steering angle was fixed to 60 deg for all measures, and the sample volume was placed in the center of the vessel and adjusted to cover the entire vascular lumen. Additionally, blood velocity of the right middle cerebral artery (MCA) and the left posterior cerebral artery (PCA) was simultaneously measured using a transcranial Doppler (TCD) (Spencer Technologies, Seattle, WA, USA) while adhering to standardized TCD procedures (Willie et al., ). A specialized headband fixation device (model M600 bilateral headframe, Spencer Technologies) secured TCD probes in position throughout trials. All files were screen-captured and saved as video files for offline analysis at 30 Hz using custom-designed software. Simultaneous measures of luminal diameter and velocity over a minimum of 12 cardiac cycles were used to calculate blood velocity, PI, DFi, cerebrovascular resistance (CVR i ), and conductance (CVC i ). A DFi >1 indicates greater damping of pulsatile hemodynamic forces, whereby a portion of the proximal arteries' pulsatility is attenuated before reaching the distal arteries. A DFi equal to 1 indicates an absence of damping, allowing all the pulsatility of upstream vessels to be transmitted to downstream vessels. A DFi <1 indicates amplified pulsatility in distal arteries compared to proximal arteries. Amplification of pulsatility is hypothesized to reflect increased transmittance of pulsatile hemodynamic forces to the microvasculature, and chronic amplification is believed to be associated with age-related cognitive decline (Chiesa et al., ; Lefferts et al., ). The within-day coefficient of variation for the ICA and VA blood flow was 7% and 4%, respectively. There were no posterior velocity measures for 3 of the 10 participants. 2.4 Hemodynamic equations The following hemodynamic equations were used to calculate the pulsatility index, damping factor index, MCA and PCA cerebrovascular conductance and resistance indices, ICA and VA cerebrovascular resistance, and β-stiffness. Pulsatility index (PI) was calculated for the ICA, VA, MCA, and PCA as follows: (1) PI (a.u.) = (Sys − Dias) / Velocity, where Sys is the systolic (maximum) and Dias is the diastolic (minimum) average velocity calculated for each cardiac cycle during the selected stable range, and Velocity is the mean velocity over the entire chosen stable range; PI is expressed in arbitrary units (a.u.). The anterior and posterior damping factor index (DFi) was calculated as: (2) DFi (a.u.) = PI ICA or VA / PI MCA or PCA, where PI ICA or VA is the pulsatility index of the ICA or VA, and PI MCA or PCA is the pulsatility index of the MCA or PCA, in arbitrary units (a.u.). As previously suggested, anterior damping includes PI values from the ICA and MCA, while posterior damping includes values from the VA and PCA. The MCA and PCA cerebrovascular conductance index (CVC I ) was calculated as: (3) CVC I ((cm/s)/mmHg) = (MCA V or PCA V ) / MAP, where MCA V or PCA V is the mean velocity for the MCA or PCA, and MAP is the mean arterial pressure calculated over the cardiac cycle range. The MCA and PCA cerebrovascular resistance index (CVR I ), the inverse of conductance, was calculated as: (4) CVR I (mmHg/(cm/s)) = MAP / (MCA V or PCA V ), where MCA V or PCA V is the mean velocity for the MCA or PCA, and MAP is the mean arterial pressure calculated over the cardiac cycle range. The ICA and VA cerebrovascular resistance (CVR ICA or VA ) was calculated as: (5) CVR ICA or VA (mmHg/(mL/min)) = MAP / Q ICA or VA , where MAP is the mean arterial pressure calculated over Q ICA or VA (internal carotid or vertebral artery blood flow). The ICA and VA β-stiffness was calculated using the natural logarithmic conversion of the ratio of systolic and diastolic blood pressure: (6) β = ln(SBP/DBP) × (D/ΔD), where ln is the natural logarithm, SBP is systolic blood pressure, DBP is diastolic blood pressure, D is diameter, and ΔD is the difference in diameter. 2.5 Statistical analysis Values are presented as mean ± standard deviation (SD). Statistical analysis was performed using JASP (Ver. 0.18.3.0, Amsterdam, the Netherlands). Within-subject effect sizes were reported (η 2 ). A repeated-measures ANOVA was used to test for potential differences in the measured variables during hyperthermia. Mauchly's W test was used to assess whether the assumption of sphericity was met. When sphericity was violated, a Greenhouse–Geisser correction was used to adjust the degrees of freedom. When significant differences in cerebrovascular pulsatile hemodynamic forces were detected, Holm's post hoc tests were used to determine specific differences between conditions. The a priori level of statistical significance was set at p < 0.05.
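For illustration only, the short Python sketch below applies Equations (1)–(4) and (6) to made-up values; the numbers and variable names are hypothetical and are not data from this study.

```python
import math


def pulsatility_index(v_sys: float, v_dias: float, v_mean: float) -> float:
    """Eq. (1): PI = (systolic velocity - diastolic velocity) / mean velocity."""
    return (v_sys - v_dias) / v_mean


def damping_factor_index(pi_feeding: float, pi_downstream: float) -> float:
    """Eq. (2): DFi = PI of the feeding artery (ICA or VA) divided by PI of the
    downstream artery (MCA or PCA); >1 = damping, 1 = no damping, <1 = amplification."""
    return pi_feeding / pi_downstream


def conductance_index(v_mean: float, map_mmhg: float) -> float:
    """Eq. (3): CVC_I = mean velocity / MAP. Eq. (4), CVR_I, is its reciprocal."""
    return v_mean / map_mmhg


def beta_stiffness(sbp: float, dbp: float, diameter: float, delta_d: float) -> float:
    """Eq. (6): beta = ln(SBP / DBP) * (D / delta-D)."""
    return math.log(sbp / dbp) * (diameter / delta_d)


# Hypothetical single-beat values (velocities in cm/s, pressures in mmHg, diameters in cm)
pi_ica = pulsatility_index(v_sys=80.0, v_dias=30.0, v_mean=50.0)    # 1.00
pi_mca = pulsatility_index(v_sys=95.0, v_dias=45.0, v_mean=62.5)    # 0.80
dfi_anterior = damping_factor_index(pi_ica, pi_mca)                 # 1.25 -> pulsatility damped
cvc_mca = conductance_index(v_mean=62.5, map_mmhg=96.0)             # ~0.65 (cm/s)/mmHg
beta_ica = beta_stiffness(sbp=118.0, dbp=70.0, diameter=0.50, delta_d=0.04)

print(round(pi_ica, 2), round(pi_mca, 2), round(dfi_anterior, 2),
      round(cvc_mca, 2), round(beta_ica, 2))
```

The same arithmetic, applied to the measured velocity, pressure, and diameter data rather than to these invented numbers, underlies the indices reported in the Results below.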
RESULTS Baseline esophageal temperature was 37.1 ± 0.2°C and increased to 39.0 ± 0.4°C (HT) and 39.1 ± 0.4°C (HT-C). MAP decreased from BL (112 ± 6.0 mmHg) during HT (96 ± 9.7 mmHg, p < 0.001) and HT-C (99 ± 9.6 mmHg, p < 0.001). BL HR (66 ± 8 bpm) increased to 122 ± 13 bpm (HT) and 120 ± 12 bpm (HT-C), both p < 0.001. Anterior (ICA and MCA) and posterior (VA and PCA) velocities, ICA and VA β-stiffness, CVR MCA and PCA, CVC MCA and PCA, and IJVP are presented in Table . Anterior PI increased in both the ICA (η 2 = 0.66) and MCA (η 2 = 0.76) from BL (1.0 ± 0.24 a.u.; 0.79 ± 0.17 a.u., respectively) during HT (1.6 ± 0.4 a.u.; 1.4 ± 0.28 a.u.; both p < 0.001), but only PI MCA increased from BL during HT-C (1.1 ± 0.20 a.u., p = 0.003). Both ICA and MCA PI during HT-C (1.192 ± 0.165 a.u.; 1.092 ± 0.201 a.u.) were lower than during HT (1.6 ± 0.4 a.u.; 1.4 ± 0.28 a.u.), p = 0.001. VA PI violated Mauchly's W assumption of sphericity (p = 0.048) and was corrected using a Greenhouse–Geisser correction (p < 0.001). Posterior PI increased in both the VA (η 2 = 0.90) and PCA (η 2 = 0.70) from BL (1.388 ± 0.549 a.u.; 0.775 ± 0.157 a.u., respectively) during HT (2.341 ± 0.524 a.u.; 1.688 ± 0.455 a.u., p < 0.001), but only PI VA increased from BL during HT-C (1.797 ± 0.541 a.u.; p = 0.003). Both HT PI VA and PCA values were greater than HT-C (1.797 ± 0.541 a.u. [p < 0.001]; 1.018 ± 0.231 a.u. [p = 0.006]). Anterior and posterior PI are demonstrated in Figure . BL anterior DFi (1.273 ± 0.138 a.u.) decreased in both HT (1.079 ± 0.19 a.u.; p = 0.007) and HT-C (1.117 ± 0.231 a.u.; p = 0.021). Posterior DFi did not change, p = 0.116. BL CVC MCA (1.827 ± 0.372 mmHg.mL.min−1) increased during HT (2.362 ± 0.456 mmHg.mL.min−1; p < 0.001). Anterior and posterior DFi are demonstrated in Figure .
DISCUSSION The primary finding from this study was that poikilocapnic and isocapnic hyperthermia reduced anterior but not posterior DFi. Although hemodynamic transmittance (DFi <1) was not observed on average, 50% of subjects did incur a DFi <1 during HT, which is suggestive that there is a phenotype associated with increased transmittance. Thus, during HT, the pulsatile stress transmittance along the ICA‐MCA cerebrovascular segment is variable. Furthermore, the lack of DFi changes in the posterior circulation suggests that the posterior anatomy may limit deleterious pulsatile hemodynamic transmission from penetrating into the brain stem (Bain et al., ). Collectively, our results do not support our primary hypothesis, as HT and HT‐C demonstrated a reduction of DFi irrespective of P a CO 2 manipulation, as shown in Figure . Even without a clear connection, the severity of the DFi responses was less variable in HT‐C. In addition, there was no association with arterial resistance, as MCA resistance increased during HT and did not change during HT‐C conditions. However, volumetric Q ICA measures allow for a more robust measure of cerebrovascular resistance, which indicated CVR ICA decreased during HT‐C (0.30 mmHg.cm.s −1 ) from BL (0.37 mmHg.cm.s −1 ) but was not altered during HT (0.37 mmHg.cm.s −1 ). Collectively, DFi responses to HT and HT‐C may require a more integrative introspection to understand the nuances associated with cerebrovascular hemodynamics during hyperthermia. Specifically, changes in intracranial pressure (ICP) and cerebral perfusion pressure, in addition to changes in vascular resistance, likely each impact hyperthermic DFi. 4.1 Intracranial pressure's influence on pulsatility It is well established that hyperthermia decreases CBF but may also increase ICP (Cairns & Andrews, ), which is supported by recent findings using non‐invasive measures that suggest an 18% increase in ICP when moderately hyperthermic (≤39.5°C) (Gibbons et al., ). Mathematical models indicate that the ICP‐PI relationship is linearly related under normal conditions (Ursino & Lodi, ); however, further investigation is required to fully elucidate ICP‐PI interactions during hyperthermia. The IJVP (a surrogate measure for ICP) increased from BL during HT and HT‐C, with larger increases occurring when CO 2 was clamped. IJVP is a useful surrogate for ICP in healthy volunteers as it compares well with direct measures (Cardim et al., ) and it is reasonable to consider that increases in CBF volume entering the cranium during HT‐C would result in an increased ICP. While speculative, our results may suggest that moderate hyperthermia may attenuate the ICP‐PI relationship. Animal models have demonstrated a >300% increase in ICP (Lin & Lin, ; Shih et al., ) during hyperthermia. Whether this increased ICP results directly from a more permeable blood–brain barrier and increased cerebral water content (~6%) (Sharma et al., ; Sharma, Nyberg, et al., ) or whether the increased ICP itself causes greater blood–brain barrier permeability and altered water content remains undetermined. Comparisons between rodent and human models are challenging, as rodents' ability to stabilize T c is limited by reduced cutaneous management of hyperthermia (i.e., lack of sweating), which diminishes their capacity to represent human physiological hemodynamic responses. Increased ICP in these cases is likely associated with poor blood pressure regulation and individuals with low compliance (Nyholm et al., ). 
Other studies (Sharma et al., ; Sharma & Cervós‐Navarro, ; Sharma, Nyberg, et al., ; Sharma, Westman, et al., ) and postmortem human analysis (Hales et al., ; Malamud et al., ) have demonstrated hyperthermic cerebral edema and hemorrhaging in extreme cases. However, biomarkers indicative of blood–brain barrier disruption do not appear to be elevated in hyperthermic humans. Whether cerebral edema and hemorrhaging are influenced by increased pulsatile hemodynamic transmission remains unclear. Further investigations are needed to directly assess whether hyperthermia disrupts the ICP‐PI relationship and to elucidate its role in DFi. 4.2 Hyperthermic sympathetic regulation A common perspective is that sympathetic nerve activity (SNA) vasoconstricts the cerebral vasculature (Bain et al., ; Brothers et al., ; Cassaglia et al., ), which would have a significant effect on cerebrovascular resistance. Indeed, passive heat stress is known to increase SNA in muscle (Low et al., ); however, the paucity of cerebral sympathetic‐mediated vasoconstriction and its capacity to modulate arterial function is limited (Bain et al., , ; Nelson et al., ). Further, Tymko et al. ( ) indicate that blood pressure and hypercapnic challenges do not always cause cerebral sympathetic‐mediated vasoconstriction of the cerebral vasculature. Based on the understanding of central arterial compliance (i.e., arteries ability to expand and recoil throughout the cardiac cycle), acute increases in sympathetic‐adrenergic tone reduce compliance (Boutouyrie et al., ; Tanaka et al., ) and tonically restrain it in large arteries (Failla et al., ; Mangoni et al., ; Tanaka et al., ). Coupled with augmented forward wave intensity (Lefferts et al., ) and individuals' unique anatomical cerebrovascular structures (e.g., tortuous vessels and many bifurcations), the pulsatile hemodynamic transmission may increase during hyperthermia. However, it has been suggested that SNA may not cause cerebral vasoconstriction (Osol et al., ), and arteries distal to the Circle of Willis are unaffected (Warnert et al., ). Yet, changes in MAP may alter the transmural pressure and influence the depolarization of mechanosensitive cytosolic calcium ion channels that mediate resistant arteries and arterioles' myogenic reactivity, increasing wall tension (Osol et al., ). Thus, the calcium‐mediated myogenic reactivity in the macro and microvasculature may drive reduced DFi and DFi assessed transmission during hyperthermia. Therefore, hyperthermia may paradoxically impede the parenchymal arterioles' myogenic restrictive response, which dictates reduced cerebral perfusion pressure and prompts pulsatile hemodynamic transmission. 4.3 Limitations This study was not without its experimental limitations. Intracranial measures of cerebral hemodynamics acquired using TCD are limited by the low spatial resolution offered by TCD. This limitation does not allow for MCA or PCA diameter measures; thus, assessing volumetric hemodynamics through the extra‐intracranial cerebral vessels was not possible. Studies relying on large extracranial to intracranial conduit artery hemodynamics (i.e., DFi, resistance, conductance) during thermal stimuli may benefit from including pulsatile microvascular assessments (e.g., near‐infrared spectroscopy) and mathematical approaches (i.e., modified Windkessel model) to improve our understanding of vascular compliance throughout the entire cerebrovascular tree (Moir et al., ; Shoemaker et al., ). 
However, it remains to be studied if a modified Windkessel model is superior to using volumetric ICA and VA resistances to understand the hemodynamic forces in the brain during hyperthermia. Furthermore, the present study had a small sample size that only included healthy males. Consequently, this limitation precluded our ability to characterize phenotypic variations or establish correlations between changes in MAP or PaCO₂ and changes in hemodynamic transmission. Indeed, future studies should include females, especially considering the paucity of studies that do not include female data desegregation. Sex hormone fluctuations over the menstrual cycle shift T c and the thermoregulatory response (Kolka & Stephenson, ). Additionally, females tend to have a higher resting CBF compared to males (Muer et al., ; Rodriguez et al., ; Tomoto et al., ), although much of this CBF difference is related to lower hemoglobin in females. A lower hemoglobin content likely impacts cerebral oxygen delivery and resting state oxygen extraction fractions. For instance, the inverse relation between MCA V , hemoglobin content, and peak systolic velocity may indicate that females require a higher CBF to maintain adequate cerebral oxygen delivery (Mazzucco et al., ). However, male and female differences in resting state oxygen extraction fractions remain largely unexplored. Females do have a lower PI in comparison to their male counterparts (Alwatban et al., ), which may result in deleterious outcomes when exposed to hyperthermia due to their heat intolerance (Alele et al., ). These physiological differences underscore the need for studies investigating cerebral hemodynamics to include measures of hemoglobin content, sex hormone fluctuations, and cerebral pulsatile damping, particularly in females. Without addressing these factors, the understanding of sex‐specific responses to hyperthermia and heat stress remains incomplete, leaving a critical gap in knowledge with potentially significant implications for health and performance outcomes. This study did not control for dehydration, which accelerates the decline in cerebral perfusion (Trangmar et al., ) and remains unclear during passive resting hyperthermia; however, it is likely mediated by blood pressure. Future studies should account for hydration status at a minimum and control for it as a gold standard. While direct measures of ICP were not taken due to impracticality (e.g., craniotomy), they can be obtained through non‐invasive measures (e.g., optic nerve sheath diameter measures by trans bulbar sonography (Bäuerle et al., )), which would be useful to better understand the hemodynamic response to hyperthermia. In addition, future studies should consider taking bilateral measures of the VA and PCA due to their unique anatomical structure. Lastly, no measures of SNA were taken, and future studies should consider its potential neurogenic impact on cerebral arteries during hyperthermia.
In addition, future studies should consider taking bilateral measures of the VA and PCA due to their unique anatomical structure. Lastly, no measures of SNA were taken, and future studies should consider its potential neurogenic impact on cerebral arteries during hyperthermia.
CONCLUSION
The observed reductions in anterior cerebral DFi are not indicative of deleterious pulsatile hemodynamic transmission; however, there may be an individual phenotype associated with increased transmission, as 50% of individuals dropped below a DFi of 1.0. Notably, reductions in DFi and DFi-assessed transmission were found only in the anterior circulation, not the posterior. Considering rising global temperatures, advancing the mechanistic understanding of pulsatile hemodynamic transmission in cerebral vessels during hyperthermia can provide important insight into the impact of environmental temperature on brain health. Future studies need to consider both structural and functional hemodynamic regulation to better understand pulsatile hemodynamic transmission during heat stress.
All authors have read and approved the final version of this manuscript and agree to be accountable for all aspects of the work to ensure that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. All persons designated as authors qualify for authorship, and all those who qualify are listed.
Dr. Kurt Smith is funded through a Natural Sciences and Engineering Research Council (NSERC) of Canada grant, RGPIN-2020-06269. Dr. Bain is funded through a Natural Sciences and Engineering Research Council (NSERC) of Canada grant, RGPIN-2020-05760.
None declared.
The ethics committee of the University of British Columbia approved the study (H15-00166). The study conformed to the standards set by the Declaration of Helsinki, except for registration in a database. All participants provided informed consent before experimentation.
|
A Review on the Synthesis and Chemical Transformation of Quinazoline 3-Oxides | c053a3a7-c66e-45f1-a29d-1d5f43bd99dd | 9697966 | Pharmacology[mh] | Pyrimidine 1 , shown in , is a six-membered heterocyclic aromatic organic compound containing two nitrogen atoms at positions 1 and 3. This scaffold forms nuclei of several pharmacologically relevant compounds with a wide spectrum of biological activities, including anti-tubercular, anti-bacterial, anti-fungal, anti-viral, and anti-inflammatory properties . The pyrimidine ring readily undergoes N -oxidation using hydrogen peroxide, m -chloroperbenzoic acid (MCPBA), monopermaleic acid, monoperphtalic acid, or p -methylperbenzoic acid to afford pyrimidine N -oxides . However, this nucleus is susceptible to hydrolysis, ring opening, and decomposition during oxidation resulting in reduced yields of the N -oxide . Benzo-fused pyrimidine derivatives such as quinazolines (1,3-diazanaphthalenes) 2 are also associated with a wide range of biological and pharmacological activities, including anti-cancer, anti-tuberculosis, anti-hypertensive, anti-bacterial, anti-inflammatory, and anti-malarial properties . Considerable effort has been devoted to the synthesis, transformation, and biological properties of these benzo-fused pyrimidine derivatives . Both nitrogen atoms of the pyrimidine nucleus of quinazolines can be oxidised to afford either the 1-oxide or 3-oxide derivatives. Among this class of nitrogen-based heterocycles, quinazoline 3-oxides represent valuable intermediates in the synthesis of benzodiazepine analogues and other polycyclic compounds of biological importance . Many valuable benzodiazepine-based drugs for the treatment of seizures and anxiety, such as chlordiazepoxide and diazepam, were first prepared from the corresponding quinazoline 3-oxides . Despite this, the synthesis, transformation, and applications of the quinazoline-3-oxides have received less attention when compared to the other classes of N -oxides, such as 5-membered heteroaromatic N -oxides, pyridine N -oxides, and diazine N -oxides . Interestingly, quinazoline 3-oxides do not feature in any of the reviews dedicated to the synthesis, biological activity, and chemical transformation of quinazolinones and/or quinazoline derivatives. In view of the considerable interest in quinazoline 3-oxides as bronchodilators, cardiotonics, and fungicides , it was decided to provide an up-to-date record of their synthesis and chemical transformation. 2.1. Direct Oxidation of Quinazolines Although there are several reagents that can be used for the direct oxidation of quinazoline nuclei to N -oxides, this approach is complicated by the lack of selectivity. Moreover, the pyrimidine nucleus is susceptible to hydrolysis, ring opening, and decomposition resulting in reduced yields of the N -oxides. Treatment of 4-alkylsubstituted quinazoline 3a (R = -CH 3 ) and 3b (R = -CH 2 CH 3 ) with monoperphthalic acid (1.2–1.3 equiv.) in ether at room temperature (RT) for 5 h, for example, afforded mixtures of the corresponding N -1 ( 4a , b ) and N -3 oxides ( 5a , b ) as well as the quinazolinone derivative 6 . The preference for N -1 oxidation over the N -3 centre resulted in significantly reduced yields of the biologically relevant quinazoline 3-oxides. Moreover, this reaction produced quinazolin-4(3 H )-one as the main product and quinazoline N -oxides as by-products. 
Recourse to the literature revealed a method that made use of a recombinant soluble di-iron monooxygenase (SDIMO) PmlABCDEF overexpressed in Escherichia coli which was used as a whole-cell biocatalyst to oxidize pyridines, pyrazines, pyrimidines, and their benzo-fused derivatives into the corresponding N -oxides . Quinazoline 2 was among the benzo-fused heterocycles with two nitrogen atoms, which was transformed into quinazoline 3-oxide in 67% yield without any side oxidation products. The drawback associated with the direct N -oxidation of quinazoline scaffold using strong oxidizing agents led to the development of alternative methods for the synthesis of quinazoline 3-oxides, and these strategies are described in detail below. 2.2. Synthesis of Quinazoline 3-Oxides The most common strategy for the synthesis of quinazoline 3-oxides is based on intramolecular cyclocondensation of the intermediate N -acyl-2-aminoaryl ketone oximes using various reagents. 2.2.1. Intramolecular Cyclocondensation of the N -Acyl-2-aminoaryl Ketone Oximes The 2-aminoaryl ketone oximes were previously cyclized with triethyl orthoformate to afford quinazoline 3-oxides albeit in low yields . Improved yields of the 2,4-dicarbo substituted quinazoline 3-oxides were achieved via initial acylation of 2-aminoacetophenone followed by intramolecular cyclocondensation of the intermediate N -acyl-2-aminoaryl ketone oximes with hydroxylamine hydrochloride . The N -oxide of 2,4-dimethylquinazoline 8 , for example, was obtained in 75% yield by treatment of ( E )- N -(2-(1-(hydroxyimino)ethyl)phenyl)acetamide 7 with hydroxylamine hydrochloride under reflux for 3 h . Hydroxylamine hydrochloride serves as a proton source to protonate an oxygen atom of the amide moiety, followed by cyclocondensation of the incipient intermediate A to afford B . The latter then undergoes dehydrogenation to afford the fully aromatic derivative 8 . Hitherto, the analogous 2-alkyl/cycloalkyl substituted 4-methylquinazoline 3-oxides were evaluated for biological activity as pulmonary-selective inhibitors of ovalbumin-induced, leukotriene-mediated bronchoconstriction . The most active and selective compounds contained a methyl group at the 4-position, a medium-sized branched alkyl group at the 2-position, and a small electron donating group on the phenyl ring. Series of quinazoline 3-oxides 10 substituted with various groups on the fused benzo ring were prepared in 12–95% yield by subjecting the 2-aminoacetophenone oxime derivatives 9 to hydroxylamine hydrochloride and pyridine-ethanol mixture under reflux . The 2-carbo substituted derivatives of 11 , on the other hand, were prepared by treatment of substrates 9 with triethyl orthopropionate or triethyl orthoacetate under reflux for 1–3 h . The mechanism of this reaction involves the formation of an ethoxymethyleneamino derivative or Schiff base followed by cyclocondensation to afford quinazoline 3-oxide. Analogues of compound 11 were used as cardiotonic and bronchodilating agents . The 2-aminobenzaldoxime 12 was subjected to a one-pot reaction with benzaldehyde derivatives in the presence of H 2 O 2 -sodium tungstate in THF to afford after 24 h the corresponding quinazoline 3-oxides 13 in good overall yields (69–81%) . The mechanism of this reaction is envisaged to involve the initial nucleophilic addition of the aniline derivative to the carbaldehyde group followed by cyclocondensation of the intermediate Schiff base A to afford a dihydroquinazoline 3-oxide derivative B . 
H2O2-sodium tungstate then serves as an oxidizing system on B to afford 13. A series of N-[2-(1-hydroxyiminoethyl)phenyl]amides and N-(4-halo-2-(1-(hydroxyimino)ethyl)phenyl)amide derivatives 14 (R = alkyl or aryl) were subjected to acid-promoted intramolecular cyclization with trifluoroacetic acid (TFA) under reflux for 2 h to afford, upon aqueous workup and purification through silica gel column chromatography, the corresponding 2,4-dicarbo substituted quinazoline 3-oxides 15 in 72–89% yield. These compounds resulted from initial protonation of the amide oxygen by TFA followed by an attack of the activated amide carbon of A to form intermediate B. The heterocyclic ring of the latter underwent spontaneous dehydrogenation to afford a fully aromatic derivative. These compounds were, in turn, evaluated through enzymatic assays (in vitro and in silico) for potential inhibitory effect against cyclooxygenase-1/2 (COX-1/2) and lipoxygenase-5 (LOX-5) activities as well as for free radical scavenging potential and cytotoxicity. Structure–activity relationship analysis suggested that the presence of a halogen atom at the C-6 position and a 2-aryl group enhanced the inhibitory effect against COX-2, and this observation was well supported by molecular docking studies. The presence of a π-electron delocalizing group on the fused benzo ring, on the other hand, enhanced the free radical scavenging effect of the quinazoline 3-oxides. Methods that employ aryloximes and isothiocyanates for the construction of quinazoline 3-oxide derivatives in the presence of iodine have also been developed. The (2-aminophenyl)(phenyl)methanone oximes 16 (X = H or Cl) and arylisothiocyanates 17, for example, were reacted with iodine in dimethylsulphoxide (DMSO) at RT to afford the corresponding 2-(arylamino)-4-phenylquinazoline 3-oxide derivatives 18 in 94–98% yield. This reaction proceeded via the initial condensation of the oxime 16 with arylisothiocyanate 17 in DMSO to afford the thiourea intermediate A. Iodine-mediated cyclization of A afforded intermediate B via N–C and S–I bond formation. Aromatization of the latter intermediate occurred with the generation of HI and S to afford the cyclized aromatic products 18. The analogous oximes derived from 2-aminoacetophenone or 2-amino-5-bromo-3-iodoacetophenones, on the other hand, have previously been found to undergo methanesulfonyl chloride-mediated cyclization in the presence of triethylamine in dichloromethane at RT to afford the corresponding 1H-indazoles. Under similar reaction conditions, the N-aryl o-aminoacetophenone oximes afforded a variety of N-aryl-1H-indazoles and the analogous benzimidazoles when 2-aminopyridine and trimethylamine were used as bases, respectively. The oxime derivatives 19 were reacted with ethoxycarbonyl isothiocyanate in ethyl acetate to form the intermediate thioureas 20, which spontaneously cyclized in refluxing ethanol to afford the desired substituted ethyl (3-oxido-2-quinazolinyl)carbamates 21 in good yields. Madabhushi et al. previously employed zinc(II) triflate (Zn(OTf)2) as a Lewis acid catalyst in anhydrous toluene under reflux to effect the cyclocondensation of 2-aminoaryl ketones 22 with acetohydroxamic acid derivatives 23 to afford the corresponding 2,4-disubstituted quinazoline 3-oxides 24. The mechanism of this reaction involves an initial attack of the electrophilic carbonyl carbon of 2-aminoacetophenone by the acetohydroxamic acid derivative to generate intermediate A.
The latter would then undergo rapid intramolecular cyclization through the reaction of N -acetyl carbonyl with adjacent amine moiety followed by dehydration of B with the assistance of zinc species as a Lewis acid to produce a quinazoline 3-oxide with the elimination of two molecules of water . Methods involving the use of transition metals for the synthesis of quinazoline 3-oxides have also been developed, and examples are discussed in the next section. 2.2.2. Transition Metal-Mediated Reactions to Afford Quinazoline N -Oxides The N -(2-(1-(hydroxyimino)ethyl)phenyl)benzamide 27 was prepared as a sole product by subjecting acetophenone oxime 25 and 1,4,2-dioxazol-5-one 26 to dichloro(pentamethylcyclopentadienyl)rhodium(II) dimer ([Cp*RhCl 2 ] 2 ) as a catalyst in methanol under reflux for 12 h . Attempted cyclization of this N -(2-(1-(hydroxyimino)ethyl)phenyl)benzamide in acetic acid by these authors resulted in the recovery of the starting material with no quinazoline 3-oxide detected in the reaction mixture. The keto oxime 25 was found to undergo Zn(II)-catalyzed cyclocondensation-dehydration in tetrafluoroethylene (TFE) under a nitrogen atmosphere at 80 °C in a pressure tube to afford the quinazoline 3-oxide 28 in yield of 93% . A one-pot Rh(III)-catalyzed C−H activation-amidation of the ketoximes 29 and 1,4,2-dioxazol-5-ones 30 , and subsequent Zn(II) catalyzed cyclocondensation-dehydration of the incipient N -(2-(1-(hydroxyimino)ethyl)phenyl)benzamide afforded the 2,4-dicarbo substituted quinazoline 3-oxides 31 . The active RhCp*X 2 (X = NTf 2 or OAc) species is envisaged to be generated from the anion exchange between [RhCp*Cl 2 ] 2 and Zn(NTf) 2 or HOAc. Sawant et al. developed a direct one-pot, three-component reaction of 2-azidobenzaldehyde, isocyanide, and hydroxylamine hydrochloride to afford quinazoline-3-oxides . Palladium acetate catalyzed reaction of 2-azidobenzaldehyde 32 , isocyanides, and hydroxylamine hydrochloride in toluene in the presence of 4 Å molecular sieves under reflux afforded the quinazoline-3-oxides 33 in a single-pot operation . The mechanistic study revealed that the reaction proceeds via initial palladium-catalyzed azide–isocyanide denitrogenative coupling to afford intermediate A . Oximation of the carbaldehyde moiety of this intermediate and subsequent 6-exo-dig cyclization afforded the quinazoline 3-oxide derivative. The addition of 4 Å molecular sieves improved the overall yield of the desired product by removing water produced in situ during the formation of hydrazine. Although several substituted isocyanides reacted well under these conditions, the aromatic and secondary isocyanides failed to react, and the starting materials were recovered unchanged. Another conventional approach for the synthesis of quinazoline 3-oxides involves the dehydrogenation of the corresponding readily accessible 1,2-dihydroquinazoline 3-oxides , as described below. 2.3. Dehydrogenation of the 1,2-Dihydroquinazoline 3-Oxides 2-Aminobenzaldehyde or 2-aminoacetophenone derivatives readily undergo oximation with hydroxylamine hydrochloride in the presence of an amine base to afford the corresponding 2-aminobenzaldoximes or 2-aminoacetophenone oxime derivatives, respectively. Nucleophilic addition of the o -aminobenzaldoximes or 2-aminoacetophenone oxime derivatives to benzaldehyde derivatives and subsequent in situ cyclocondensation of the resultant intermediate afforded the corresponding 1,2-dihydroquinazoline 3-oxides. 
The latter were, in turn, evaluated for cytotoxicity against the human promyelocytic leukaemia HL-60 and lymphoblastic leukaemia NALM-6 cell lines . The oxime derived from 2-aminoacetophenone 34 (R = H), for example, has previously been reacted with a series of aryl aldehydes in the presence of p -toluene sulfonic acid as a catalyst in ethanol at RT for 5–15 min. to afford the corresponding 1,2-dihydroquinazoline 3-oxides 35 . Under similar reaction conditions, the oxime derived from 2-(methylamino)acetophenone (R = CH 3 ) afforded after 1 h, the corresponding 1,2-dihydroquinazoline 3-oxides . Samandran et al. also synthesised a series of the 1,2-dihydroquinazoline 3-oxides from the reaction of equimolar amounts of amino oximes with the corresponding aldehydes in ethanol at RT for 24 h . 2-Aminoacetophenone oxime analogue 36 was previously reacted with butanedione monooxime 37 in acetic acid under reflux for 24 h to afford ketoximes 38 . The latter were, in turn, cyclized in ethanol–acetic acid mixture under reflux for 24 h to afford the corresponding quinazoline 3-oxides 39 in 60–75% yield . These quinazoline 3-oxides were evaluated for cytotoxic activities against the human leukaemia HL-60 cells under hypoxic and aerobic conditions using tirapazamine as the reference standard. Chen and Yang previously exposed 4-methyl-2-(4-nitrophenyl)-1,2-dihydroquinazoline-3-oxide 40 to visible light in the presence of 0.5 mol % tris(bipyridine)ruthenium(II) chloride (Ru(bpy) 3 Cl 2 ) as a photocatalyst in acetonitrile under aerobic conditions and isolated 4-methyl-2-(4-nitrophenyl)quinazoline 3-oxide 41 in 63% yield . No product was obtained when the photooxidation of 40 was conducted under argon atmosphere prompting the authors to suggest the importance of molecular oxygen as the oxidant for this photoreaction. It is envisaged that visible light excited the Ru(bpy) 3 2+ to accept one electron from NH of 40 to yield the cation radical 40′ and the Ru(bpy) 3 + (see ref for fragmentation pattern). Electron transfer from the latter to molecular oxygen yielded the superoxide anion radical and regenerated the ground-state photocatalyst Ru(bpy) 3 2+ . It is envisaged that the cation radical 40′ underwent proton and hydrogen transfers to the superoxide anion radical to furnish the quinazoline 3-oxide 41 extruding hydrogen peroxide as a by-product. Oxidizing agents such as 2,3-dichloro-5,6-dicyanobenzoquinone (DDQ) , active manganese oxide (MnO 2 ) , and hydrogen peroxide (H 2 O 2 )-tungstate were used before to transform the dihydroquinazoline 3-oxides into the corresponding quinazoline 3-oxides. A series of quinazoline 3-oxides 42 (X = H, 6-Cl/Br or 7-Me) substituted at the 2-position with an alkyl or benzyl group and an electron donating or withdrawing group at the 4-position (alkyl, aryl, or heteroaryl) were synthesized in good to excellent yields (54–88%) by oxidation of the corresponding 1,2-dihydroquinazoline 3-oxides 43 using 3 equiv. of activated MnO 2 in dichloromethane at 50 °C . The advantage of the use of MnO 2 as an oxidant is the ease of its removal from the reaction mixture which involves simple filtration. Coșkun et al. have also dehydrogenated the dihydroquinazoline 3-oxides 44 using H 2 O 2 -sodium tungstate oxidant system in THF to afford after 24 h at RT the corresponding quinazoline 3-oxides 45 . However, the one-pot synthesis of these quinazoline 3-oxides from the 2-aminobenzaldoximes (refer to ) proceeded in a relatively short time resulting in improved overall yields . 2.4. 
Chemical Transformation of Quinazoline 3-Oxides Quinazolines N -oxides can undergo deoxygenation into quinazolines , acetoxylation and ring expansion to benzodiazepines . 2.4.1. Deoxygenation of Quinazoline N -Oxides The N–O bond in pyrimidine N -oxides is cleaved by catalytic reduction, low-valent phosphorus (PCl 3 or POCl 3 ) or titanium (TiCl 3 ) reagents, as well as by the more common metals used for hydrogenolysis. Deoxygenation of 4-methyl-2-phenylquinazoline 3-oxide 46 using Zn in the presence of aqueous NH 4 Cl in THF afforded 4-methyl-2-phenylquinazoline 47 in 71% yield . Deoxygenation of the analogous quinazoline 1-oxides 4a (R = CH 3 ) and 4b (R = -CH 2 CH 3 ), on the other hand, was achieved through catalytic hydrogenation (Raney Ni catalyst in MeOH under hydrogen (H 2 ) stream) to afford 4-substituted quinazoline 48 in 33–43% yield . A mixture of N -oxide 46 and phosphorus oxychloride in chloroform was heated at reflux for 15 min. followed by aqueous work-up and purification through silica gel column chromatography to afford 49 in 18% yield . Improved yield (70%) of this quinazoline derivative was observed when this quinazoline 3-oxide was treated with PCl 5 in dichloromethane at RT for 15 min. . 2.4.2. Alkoxylation of Quinazoline N -Oxides The highly acidic proton of the methyl group at the C-4 position of quinazoline-3-oxide scaffold has been found to promote acetoxylation to ester derivatives. 4-Methyl-7-methoxy-2-phenyl substituted quinazoline 3-oxide 50 , for example, was subjected to acetic anhydride under reflux for 0.5 h to afford the ester derivative 51 in 82% yield . 2.4.3. Alkylation of Quinazoline N -Oxides N -Oxide moiety in aza-heteroarene represents an efficient and removable directing group for ortho C–H bond activation. Zhao et al., for example, effected a copper-catalyzed oxidative coupling reaction between Csp 2 –H of quinazoline 3-oxide 52 and Csp 2 –H of benzaldehyde derivatives in the presence of tert -butyl hydroperoxide (TBHP) in dichloromethane at 40 °C under nitrogen atmosphere to furnish the quinazolinone derivatives 53 . Both aliphatic and aromatic substituents at the 2-position of the quinazoline 3-oxide scaffold were tolerated though the yields decreased with the increase of the chain length from the methyl to the propyl group. α,β-Unsaturated aldehydes, heteroaryl aldehydes, and aliphatic aldehydes were also found to be suitable acyl donors to afford cyclic hydroxamic esters in good to excellent yields. The authors observed the formation of quinazoline aryl ketone derivatives 54 from 52 in the presence of Cu(OAc) 2, albeit in low yields when the reaction was quenched prematurely. The quinazoline aryl ketones 54 were isolated as sole products in the presence of trimethylsilyl azide (TMSN 3 ) and copper carbonate (CuCO 3 ) . Controlled reactions revealed that compounds 53 are the consequence of initial in situ Baeyer–Villiger oxidation of quinazoline aryl ketones 54 followed by intramolecular acyl transfer to afford 53 . In a subsequent study, these authors employed this strategy on benzylic Csp 3 –H bonds with quinazoline 3-oxides 52 in the presence of CuSO 4 (3 mol %), TBHP (2 equiv.), 20 mol % of tetrabutylammonium iodide (TBAI), and NaI (70 mol %) in dichloromethane at 70 °C in sealed tubes to afford after 12 h the corresponding quinazolinone derivatives 55 . 
A copper-catalyzed oxidative coupling reaction between the Csp2–H of quinazoline 3-oxide 56 and the Csp2–H of formamides in the presence of copper hydroxide and TBHP in dichloroethane (DCE) also afforded the analogous O-quinazolinone carbamates 57. The latter are envisaged to be formed through a reaction sequence involving radical addition, Baeyer–Villiger oxidation, and intramolecular acyl transfer. Quinazoline 3-oxides 58 reacted with primary amines in the presence of TBHP as the oxidant in dioxane under reflux for 24–44 h to afford the quinazolin-4(3H)-one derivatives 59. The mechanism of this reaction was investigated using control reactions, and ESI-MS analysis revealed a complex reaction involving multiple bond dissociation/recombination steps. These mild, metal-free conditions were found to be compatible with a broad range of primary amines, producing a series of quinazolin-4(3H)-ones. Moreover, this methodology also afforded 3-(2-(1H-indol-3-yl)ethyl)quinazolin-4(3H)-one 60 in 70% yield, which is a precursor for the synthesis of the bioactive alkaloids rutaecarpine and (±)-evodiamine. Direct C-4 alkylation of the 2-unsubstituted and 2-aryl substituted quinazoline 3-oxides was previously achieved with open-chain ethers (1,2-dimethoxyethane, diethoxymethane or diethyl ether) or cyclic ethers (1,3-dioxolane or 1,3-benzodioxole) in the presence of tert-butyl peroxybenzoate (TBPB), affording a series of oxidative cross-coupling products in moderate to good yields. The reactions of 1,4-dioxane with 61a (R = H) and 61b (R = Ar) serve as representative models for the radical oxidative cross-coupling of the sp3 C–H bond in ethers with the sp2 C–H bond in quinazoline 3-oxides to afford 62a and 62b, respectively. The mechanism of this radical oxidative cross-coupling reaction is envisaged to involve the initial decomposition of TBPB to generate a tert-butoxyl radical and a benzoate radical. The most reactive tert-butoxyl radical then abstracted hydrogen from 1,4-dioxane, and the resultant dioxane radical added to quinazoline 3-oxide 61 to generate a quinazoline 3-oxide radical. Abstraction of a hydrogen atom from the quinazoline 3-oxide radical by the less sterically hindered benzoate radical (versus the tert-butoxyl radical) afforded quinazoline 3-oxide 62 and benzoic acid as a by-product. Copper(II) chloride has been employed to activate the Csp2–H bond (C-4) of the 2-aryl substituted quinazoline 3-oxide 63 and the Csp2–H bond (C-3) of various indoles 64, facilitating the cross-dehydrogenative coupling between these two chromophores to afford the quinazoline 3-oxide appended indole hybrids 65. The N-methylindoles substituted with an alkyl, halide, or alkoxy group at the 5-position afforded the expected quinazoline 3-oxide–indole hybrids in moderate to good yields. However, indoles substituted on nitrogen with an electron-withdrawing group, such as the tosyl group, failed to react. Subsequent dehydrogenation of these molecular hybrids with PCl5 (1.2 equiv.) in toluene at RT afforded the quinazoline-indole hybrids 66. Quinazoline 3-oxides are valuable intermediates in the synthesis of benzodiazepine analogues and other polycyclic compounds of biological importance, and examples of these reactions are described in the next sections.
The analogous 4-(1-benzyl-1 H -indol-3-yl)-6,7-dimethoxyquinazoline has previously been found to exhibit moderate activity against protein tyrosine kinase ErbB-2, with little or no activity against the epidermal growth factor receptor tyrosine kinase (EGFR-TK) . The 4-(indole-3-yl)quinazolines, on the other hand, were found to be highly potent EGFR-TK inhibitors with excellent cytotoxic properties against several cancer cell lines . 2.5. Synthesis of Polycyclic Quinazoline Derivatives and Benzodiazepine Analogues The oxygen atom of quinazoline 3-oxides can also participate in ring-closure reactions to yield polycyclic derivatives. The 3-oxidoquinazoline-2-carbamates 67, for example, were found to undergo reductive ring closure to afford the 3,9-dihydro-2 H -[1,2,4]oxadiazolo[3,2- b ]quinazolin-2-ones 68 . Wu et al. reported a copper-catalyzed [3 + 2] cycloaddition of quinazoline 3-oxides 69 with alkylidenecyclopropane derivatives to afford the angular polycyclic quinazoline derivatives 70 . Methyl, methoxy, fluoro and chloro functionalities were all tolerated, leading to the formation of the corresponding N -(2-(5-oxa-6-azaspiro[2.4]hept-6-en7-yl)phenyl) 69 in high yield. Yin et al. studied the [3 + 2] cycloaddition reaction between the quinazoline 3-oxides 71 (R 2 = H, alkyl, aryl) and various alkene derivatives 72 such as methyl 3-methoxyacrylate (R 3 = -CO 2 Me, R 4 = -OMe), ethyl-3-ethoxyacrylate (R 3 = -CO 2 Et, R 4 = -OEt), dimethyl maleate (R 3 , R 4 = -CO 2 Me), acrylonitrile (R 3 = -CN, R 4 = H) and 5-methyl-hex-2-enoic acid methyl ester (R 3 = -CO 2 Me, R 4 = -CH 2 CH(CH 3 ) 2 ) to afford a series of isoxazolo[2,3 -c ]quinazoline 73 in good to excellent yield with total regio- and stereoselectivities . A density functional theory (DFT) method using the B3LYP/6-31G(d) basis set further predicted the reaction to be under thermodynamic control and to favour exclusive formation of the ortho-exo cycloadduct in agreement with experimental finding . Hitherto, Heaney et al. reacted 2-styrylquinazoline 3-oxide 71 (R 2 = -CH=CHPh) with phenyl vinyl sulfone or N -methyl maleimide in THF under reflux and isolated the corresponding isoxazolo[2,3- c ]quinazoline derivatives in very low yields . These tricyclic compounds were found to be unstable in solution at RT and to rearrange to other complex products. The use of acetylene derivatives as dipolarophiles on the 4-carbo substituted quinazoline 3-oxides, on the other hand, resulted in the isolation of benzodiazepine analogues instead of the polycyclic quinazoline derivatives. The 2-aryl-4-methylquinazoline 3-oxides 74 , for example, were reacted with dimethyl acetylenedicarboxylate (DMAD) 75 in THF at 70° for 4 to afford the methyl 5-(2-methoxy-2-oxoacetyl)-4-methyl-2-phenylsubstituted-5 H -benzo[ d ][1,3]diazepine-5-carboxylates 76 in 67–74% yield . The formation of these benzodiazepine derivatives is envisaged to proceed via the rearrangement of the incipient tricyclic quinazoline intermediate A as represented in the Scheme. DMAD, on the other hand, reacted with the isomeric 2-methyl-4-phenylquinazoline 3-oxide 77 in benzene-(m)ethanol followed by purification on basic alumina to afford the phenyl acrylates 78 (13–21%) and the potentially tautomeric benzodiazepines 79 in 5–14% yield together with smaller amounts of other products . The analogous 4-methyl-2-styrylquinazoline 3-oxides 80a – d were also reacted with dimethyl acetylenedicarboxylate (DMAD) 75a (R 2 = -CO 2 CH 3 ) or methyl propiolate 75b (R 2 = H) as dipolarophiles (2 equiv.) 
in dry THF under reflux for 16 h, followed by purification by column chromatography on either silica gel (SiO2) or neutral alumina (Al2O3), to afford the corresponding benzodiazepine analogues 81a–d. The presence of the 2-styryl group resulted in significantly reduced yields of the corresponding benzodiazepine derivatives compared to the products of the reaction of the analogous 2-phenyl-4-methylquinazoline 3-oxides 74 with DMAD (refer to above). The heterocyclic ring of these compounds was hydrolysed during purification by column chromatography on either or both SiO2 or Al2O3, and the extent of hydrolysis depended on the nature of the C-4 and C-5 substituents on the diazepine ring. The presence of the carbon-carbon double bond in the five-membered ring of the tricyclic quinazoline intermediate implicated in the reaction of the 4-carbo substituted quinazoline 3-oxides 74, 77 or 80 with the acetylene derivative 75a or 75b facilitated the rearrangement and subsequent ring enlargement to afford the benzodiazepine analogues.
2.6. Ring Expansion of Quinazoline 3-Oxides to Afford Benzodiazepine Analogues
Nucleophilic attack on C-2 of 2-chloromethyl quinazoline 3-oxide 82 by methylamine, followed sequentially by ring opening of intermediate A and intramolecular displacement of the chlorine atom by the nitrogen atom of the oxime moiety, afforded chlordiazepoxide 83 as shown in . Acid hydrolysis of chlordiazepoxide and in situ hydrolysis of the benzodiazepin-2-one 4-oxide intermediate 84, followed by its PCl3-mediated deoxygenation, afforded 1,4-benzodiazepine (diazepam) 85. Benzodiazepines enhance the effect of the neurotransmitter gamma-aminobutyric acid (GABA-A), resulting in sedative, hypnotic, anxiolytic, anticonvulsant and muscle relaxant properties. These properties make benzodiazepines and their analogues useful drugs in the treatment of anxiety, insomnia, agitation, seizures, muscle spasms and alcohol withdrawal, and as premedication for medical or dental procedures. Chlordiazepoxide and diazepam, for example, are central nervous system (CNS) agents used for the treatment of muscle spasms, seizures, trauma, and anxiety disorders.
The drawback associated with the direct N -oxidation of quinazoline scaffold using strong oxidizing agents led to the development of alternative methods for the synthesis of quinazoline 3-oxides, and these strategies are described in detail below. The most common strategy for the synthesis of quinazoline 3-oxides is based on intramolecular cyclocondensation of the intermediate N -acyl-2-aminoaryl ketone oximes using various reagents. 2.2.1. Intramolecular Cyclocondensation of the N -Acyl-2-aminoaryl Ketone Oximes The 2-aminoaryl ketone oximes were previously cyclized with triethyl orthoformate to afford quinazoline 3-oxides albeit in low yields . Improved yields of the 2,4-dicarbo substituted quinazoline 3-oxides were achieved via initial acylation of 2-aminoacetophenone followed by intramolecular cyclocondensation of the intermediate N -acyl-2-aminoaryl ketone oximes with hydroxylamine hydrochloride . The N -oxide of 2,4-dimethylquinazoline 8 , for example, was obtained in 75% yield by treatment of ( E )- N -(2-(1-(hydroxyimino)ethyl)phenyl)acetamide 7 with hydroxylamine hydrochloride under reflux for 3 h . Hydroxylamine hydrochloride serves as a proton source to protonate an oxygen atom of the amide moiety, followed by cyclocondensation of the incipient intermediate A to afford B . The latter then undergoes dehydrogenation to afford the fully aromatic derivative 8 . Hitherto, the analogous 2-alkyl/cycloalkyl substituted 4-methylquinazoline 3-oxides were evaluated for biological activity as pulmonary-selective inhibitors of ovalbumin-induced, leukotriene-mediated bronchoconstriction . The most active and selective compounds contained a methyl group at the 4-position, a medium-sized branched alkyl group at the 2-position, and a small electron donating group on the phenyl ring. Series of quinazoline 3-oxides 10 substituted with various groups on the fused benzo ring were prepared in 12–95% yield by subjecting the 2-aminoacetophenone oxime derivatives 9 to hydroxylamine hydrochloride and pyridine-ethanol mixture under reflux . The 2-carbo substituted derivatives of 11 , on the other hand, were prepared by treatment of substrates 9 with triethyl orthopropionate or triethyl orthoacetate under reflux for 1–3 h . The mechanism of this reaction involves the formation of an ethoxymethyleneamino derivative or Schiff base followed by cyclocondensation to afford quinazoline 3-oxide. Analogues of compound 11 were used as cardiotonic and bronchodilating agents . The 2-aminobenzaldoxime 12 was subjected to a one-pot reaction with benzaldehyde derivatives in the presence of H 2 O 2 -sodium tungstate in THF to afford after 24 h the corresponding quinazoline 3-oxides 13 in good overall yields (69–81%) . The mechanism of this reaction is envisaged to involve the initial nucleophilic addition of the aniline derivative to the carbaldehyde group followed by cyclocondensation of the intermediate Schiff base A to afford a dihydroquinazoline 3-oxide derivative B . H 2 O 2 -sodium tungstate then serves as an oxidizing system on B to afford 13 . Series of N -[2-(1-hydroxyiminoethyl)phenyl]amides and N -(4-halo-2-(1-(hydroxyimino)ethyl)phenyl)amide derivatives 14 (R = alkyl or aryl) were subjected to acid promoted intramolecular cyclization with trifluoroacetic acid (TFA) under reflux for 2 h to afford upon aqueous workup and purification through silica gel column chromatography the corresponding 2,4-dicarbo substituted quinazoline 3-oxides 15 in 72–89% yield . 
These compounds resulted from initial protonation of the amide oxygen by TFA followed by an attack of the activated amide carbon of A to form intermediate B . The heterocyclic ring of the latter underwent spontaneous dehydrogenation to afford a fully aromatic derivative. These compounds were, in turn, evaluated through enzymatic assays (in vitro and in silico) for potential inhibitory effect against cyclooxygenase-1/2 (COX-1/2) and lipoxygenase-5 (LOX-5) activities as well as for free radical scavenging potential and cytotoxicity. Structure–activity relationship analysis suggested that the presence of a halogen atom at the C-6 position and a 2-aryl group enhanced the inhibitory effect against COX-2, and this observation was well supported by molecular docking studies. The presence of a π-electron delocalizing group on the fused benzo ring, on the other hand, enhanced the free radical scavenging effect of the quinazoline 3-oxides. Methods that employ aryloximes and isothiocyanate for the construction of quinazoline 3-oxides derivatives in the presence of iodine have also been developed. The (2-aminophenyl)(phenyl)methanone oximes 16 (X = H or Cl) and arylisothiocyanates 17 , for example, were reacted with iodine in dimethylsulphoxide (DMSO) at RT to afford the corresponding 2-(arylamino)-4-phenylquinazoline 3-oxide derivatives 18 in 94–98% yield . This reaction proceeded via the initial condensation of the oxime 16 with arylisothiocyanate 17 in DMSO to afford the thiourea intermediate A . Iodine-mediated cyclization of A afforded intermediate B via N–C and S–I bond formation. Aromatization of the latter intermediate occurred with the generation of HI and S to afford the cyclized aromatic products 18 . The analogous oximes derived from 2-aminoacetophenone or 2-amino-5-bromo-3-iodoacetophenones , on the other hand, have previously been found to undergo methanesulfonyl chloride-mediated cyclization in the presence of triethylamine in dichloromethane at RT to afford the corresponding 1 H -indazoles. Under similar reaction conditions, the N -aryl o -aminoacetophenone oximes afforded a variety of N -aryl-1 H -indazoles and the analogous benzimidazoles when 2-aminopyridine and trimethylamine were used as bases, respectively . The oxime derivatives 19 were reacted with ethoxycarbonyl isothiocyanate in ethyl acetate to form the intermediate thioureas 20 , which spontaneously yclized in refluxing ethanol to afford the desired substituted ethyl (3-oxido-2-quinazolinyl)carbamates 21 in good yields . Madabhushi et al. previously employed Zinc(II) triflate (Zn(OTf) 2 ) as a Lewis acid catalyst in anhydrous toluene under reflux to affect the cyclocondensation of 2-aminoaryl ketones 22 with acetohydroxamic acid derivatives 23 to afford the corresponding 2,4-disubstituted quinazoline 3-oxides 24 . The mechanism of this reaction involves an initial attack of the electrophilic carbonyl carbon of 2-aminoacetophenone by the acetohydroxamic acid derivative to generate intermediate A . The latter would then undergo rapid intramolecular cyclization through the reaction of N -acetyl carbonyl with adjacent amine moiety followed by dehydration of B with the assistance of zinc species as a Lewis acid to produce a quinazoline 3-oxide with the elimination of two molecules of water . Methods involving the use of transition metals for the synthesis of quinazoline 3-oxides have also been developed, and examples are discussed in the next section. 2.2.2. 
Transition Metal-Mediated Reactions to Afford Quinazoline N -Oxides The N -(2-(1-(hydroxyimino)ethyl)phenyl)benzamide 27 was prepared as a sole product by subjecting acetophenone oxime 25 and 1,4,2-dioxazol-5-one 26 to dichloro(pentamethylcyclopentadienyl)rhodium(II) dimer ([Cp*RhCl 2 ] 2 ) as a catalyst in methanol under reflux for 12 h . Attempted cyclization of this N -(2-(1-(hydroxyimino)ethyl)phenyl)benzamide in acetic acid by these authors resulted in the recovery of the starting material with no quinazoline 3-oxide detected in the reaction mixture. The keto oxime 25 was found to undergo Zn(II)-catalyzed cyclocondensation-dehydration in tetrafluoroethylene (TFE) under a nitrogen atmosphere at 80 °C in a pressure tube to afford the quinazoline 3-oxide 28 in yield of 93% . A one-pot Rh(III)-catalyzed C−H activation-amidation of the ketoximes 29 and 1,4,2-dioxazol-5-ones 30 , and subsequent Zn(II) catalyzed cyclocondensation-dehydration of the incipient N -(2-(1-(hydroxyimino)ethyl)phenyl)benzamide afforded the 2,4-dicarbo substituted quinazoline 3-oxides 31 . The active RhCp*X 2 (X = NTf 2 or OAc) species is envisaged to be generated from the anion exchange between [RhCp*Cl 2 ] 2 and Zn(NTf) 2 or HOAc. Sawant et al. developed a direct one-pot, three-component reaction of 2-azidobenzaldehyde, isocyanide, and hydroxylamine hydrochloride to afford quinazoline-3-oxides . Palladium acetate catalyzed reaction of 2-azidobenzaldehyde 32 , isocyanides, and hydroxylamine hydrochloride in toluene in the presence of 4 Å molecular sieves under reflux afforded the quinazoline-3-oxides 33 in a single-pot operation . The mechanistic study revealed that the reaction proceeds via initial palladium-catalyzed azide–isocyanide denitrogenative coupling to afford intermediate A . Oximation of the carbaldehyde moiety of this intermediate and subsequent 6-exo-dig cyclization afforded the quinazoline 3-oxide derivative. The addition of 4 Å molecular sieves improved the overall yield of the desired product by removing water produced in situ during the formation of hydrazine. Although several substituted isocyanides reacted well under these conditions, the aromatic and secondary isocyanides failed to react, and the starting materials were recovered unchanged. Another conventional approach for the synthesis of quinazoline 3-oxides involves the dehydrogenation of the corresponding readily accessible 1,2-dihydroquinazoline 3-oxides , as described below. N -Acyl-2-aminoaryl Ketone Oximes The 2-aminoaryl ketone oximes were previously cyclized with triethyl orthoformate to afford quinazoline 3-oxides albeit in low yields . Improved yields of the 2,4-dicarbo substituted quinazoline 3-oxides were achieved via initial acylation of 2-aminoacetophenone followed by intramolecular cyclocondensation of the intermediate N -acyl-2-aminoaryl ketone oximes with hydroxylamine hydrochloride . The N -oxide of 2,4-dimethylquinazoline 8 , for example, was obtained in 75% yield by treatment of ( E )- N -(2-(1-(hydroxyimino)ethyl)phenyl)acetamide 7 with hydroxylamine hydrochloride under reflux for 3 h . Hydroxylamine hydrochloride serves as a proton source to protonate an oxygen atom of the amide moiety, followed by cyclocondensation of the incipient intermediate A to afford B . The latter then undergoes dehydrogenation to afford the fully aromatic derivative 8 . 
Hitherto, the analogous 2-alkyl/cycloalkyl substituted 4-methylquinazoline 3-oxides were evaluated for biological activity as pulmonary-selective inhibitors of ovalbumin-induced, leukotriene-mediated bronchoconstriction . The most active and selective compounds contained a methyl group at the 4-position, a medium-sized branched alkyl group at the 2-position, and a small electron donating group on the phenyl ring. Series of quinazoline 3-oxides 10 substituted with various groups on the fused benzo ring were prepared in 12–95% yield by subjecting the 2-aminoacetophenone oxime derivatives 9 to hydroxylamine hydrochloride and pyridine-ethanol mixture under reflux . The 2-carbo substituted derivatives of 11 , on the other hand, were prepared by treatment of substrates 9 with triethyl orthopropionate or triethyl orthoacetate under reflux for 1–3 h . The mechanism of this reaction involves the formation of an ethoxymethyleneamino derivative or Schiff base followed by cyclocondensation to afford quinazoline 3-oxide. Analogues of compound 11 were used as cardiotonic and bronchodilating agents . The 2-aminobenzaldoxime 12 was subjected to a one-pot reaction with benzaldehyde derivatives in the presence of H 2 O 2 -sodium tungstate in THF to afford after 24 h the corresponding quinazoline 3-oxides 13 in good overall yields (69–81%) . The mechanism of this reaction is envisaged to involve the initial nucleophilic addition of the aniline derivative to the carbaldehyde group followed by cyclocondensation of the intermediate Schiff base A to afford a dihydroquinazoline 3-oxide derivative B . H 2 O 2 -sodium tungstate then serves as an oxidizing system on B to afford 13 . Series of N -[2-(1-hydroxyiminoethyl)phenyl]amides and N -(4-halo-2-(1-(hydroxyimino)ethyl)phenyl)amide derivatives 14 (R = alkyl or aryl) were subjected to acid promoted intramolecular cyclization with trifluoroacetic acid (TFA) under reflux for 2 h to afford upon aqueous workup and purification through silica gel column chromatography the corresponding 2,4-dicarbo substituted quinazoline 3-oxides 15 in 72–89% yield . These compounds resulted from initial protonation of the amide oxygen by TFA followed by an attack of the activated amide carbon of A to form intermediate B . The heterocyclic ring of the latter underwent spontaneous dehydrogenation to afford a fully aromatic derivative. These compounds were, in turn, evaluated through enzymatic assays (in vitro and in silico) for potential inhibitory effect against cyclooxygenase-1/2 (COX-1/2) and lipoxygenase-5 (LOX-5) activities as well as for free radical scavenging potential and cytotoxicity. Structure–activity relationship analysis suggested that the presence of a halogen atom at the C-6 position and a 2-aryl group enhanced the inhibitory effect against COX-2, and this observation was well supported by molecular docking studies. The presence of a π-electron delocalizing group on the fused benzo ring, on the other hand, enhanced the free radical scavenging effect of the quinazoline 3-oxides. Methods that employ aryloximes and isothiocyanate for the construction of quinazoline 3-oxides derivatives in the presence of iodine have also been developed. The (2-aminophenyl)(phenyl)methanone oximes 16 (X = H or Cl) and arylisothiocyanates 17 , for example, were reacted with iodine in dimethylsulphoxide (DMSO) at RT to afford the corresponding 2-(arylamino)-4-phenylquinazoline 3-oxide derivatives 18 in 94–98% yield . 
This reaction proceeded via the initial condensation of the oxime 16 with arylisothiocyanate 17 in DMSO to afford the thiourea intermediate A . Iodine-mediated cyclization of A afforded intermediate B via N–C and S–I bond formation. Aromatization of the latter intermediate occurred with the generation of HI and S to afford the cyclized aromatic products 18 . The analogous oximes derived from 2-aminoacetophenone or 2-amino-5-bromo-3-iodoacetophenones , on the other hand, have previously been found to undergo methanesulfonyl chloride-mediated cyclization in the presence of triethylamine in dichloromethane at RT to afford the corresponding 1 H -indazoles. Under similar reaction conditions, the N -aryl o -aminoacetophenone oximes afforded a variety of N -aryl-1 H -indazoles and the analogous benzimidazoles when 2-aminopyridine and trimethylamine were used as bases, respectively . The oxime derivatives 19 were reacted with ethoxycarbonyl isothiocyanate in ethyl acetate to form the intermediate thioureas 20 , which spontaneously yclized in refluxing ethanol to afford the desired substituted ethyl (3-oxido-2-quinazolinyl)carbamates 21 in good yields . Madabhushi et al. previously employed Zinc(II) triflate (Zn(OTf) 2 ) as a Lewis acid catalyst in anhydrous toluene under reflux to affect the cyclocondensation of 2-aminoaryl ketones 22 with acetohydroxamic acid derivatives 23 to afford the corresponding 2,4-disubstituted quinazoline 3-oxides 24 . The mechanism of this reaction involves an initial attack of the electrophilic carbonyl carbon of 2-aminoacetophenone by the acetohydroxamic acid derivative to generate intermediate A . The latter would then undergo rapid intramolecular cyclization through the reaction of N -acetyl carbonyl with adjacent amine moiety followed by dehydration of B with the assistance of zinc species as a Lewis acid to produce a quinazoline 3-oxide with the elimination of two molecules of water . Methods involving the use of transition metals for the synthesis of quinazoline 3-oxides have also been developed, and examples are discussed in the next section. N -Oxides The N -(2-(1-(hydroxyimino)ethyl)phenyl)benzamide 27 was prepared as a sole product by subjecting acetophenone oxime 25 and 1,4,2-dioxazol-5-one 26 to dichloro(pentamethylcyclopentadienyl)rhodium(II) dimer ([Cp*RhCl 2 ] 2 ) as a catalyst in methanol under reflux for 12 h . Attempted cyclization of this N -(2-(1-(hydroxyimino)ethyl)phenyl)benzamide in acetic acid by these authors resulted in the recovery of the starting material with no quinazoline 3-oxide detected in the reaction mixture. The keto oxime 25 was found to undergo Zn(II)-catalyzed cyclocondensation-dehydration in tetrafluoroethylene (TFE) under a nitrogen atmosphere at 80 °C in a pressure tube to afford the quinazoline 3-oxide 28 in yield of 93% . A one-pot Rh(III)-catalyzed C−H activation-amidation of the ketoximes 29 and 1,4,2-dioxazol-5-ones 30 , and subsequent Zn(II) catalyzed cyclocondensation-dehydration of the incipient N -(2-(1-(hydroxyimino)ethyl)phenyl)benzamide afforded the 2,4-dicarbo substituted quinazoline 3-oxides 31 . The active RhCp*X 2 (X = NTf 2 or OAc) species is envisaged to be generated from the anion exchange between [RhCp*Cl 2 ] 2 and Zn(NTf) 2 or HOAc. Sawant et al. developed a direct one-pot, three-component reaction of 2-azidobenzaldehyde, isocyanide, and hydroxylamine hydrochloride to afford quinazoline-3-oxides . 
Palladium acetate catalyzed reaction of 2-azidobenzaldehyde 32 , isocyanides, and hydroxylamine hydrochloride in toluene in the presence of 4 Å molecular sieves under reflux afforded the quinazoline-3-oxides 33 in a single-pot operation . The mechanistic study revealed that the reaction proceeds via initial palladium-catalyzed azide–isocyanide denitrogenative coupling to afford intermediate A . Oximation of the carbaldehyde moiety of this intermediate and subsequent 6-exo-dig cyclization afforded the quinazoline 3-oxide derivative. The addition of 4 Å molecular sieves improved the overall yield of the desired product by removing water produced in situ during the formation of hydrazine. Although several substituted isocyanides reacted well under these conditions, the aromatic and secondary isocyanides failed to react, and the starting materials were recovered unchanged. Another conventional approach for the synthesis of quinazoline 3-oxides involves the dehydrogenation of the corresponding readily accessible 1,2-dihydroquinazoline 3-oxides , as described below. 2-Aminobenzaldehyde or 2-aminoacetophenone derivatives readily undergo oximation with hydroxylamine hydrochloride in the presence of an amine base to afford the corresponding 2-aminobenzaldoximes or 2-aminoacetophenone oxime derivatives, respectively. Nucleophilic addition of the o -aminobenzaldoximes or 2-aminoacetophenone oxime derivatives to benzaldehyde derivatives and subsequent in situ cyclocondensation of the resultant intermediate afforded the corresponding 1,2-dihydroquinazoline 3-oxides. The latter were, in turn, evaluated for cytotoxicity against the human promyelocytic leukaemia HL-60 and lymphoblastic leukaemia NALM-6 cell lines . The oxime derived from 2-aminoacetophenone 34 (R = H), for example, has previously been reacted with a series of aryl aldehydes in the presence of p -toluene sulfonic acid as a catalyst in ethanol at RT for 5–15 min. to afford the corresponding 1,2-dihydroquinazoline 3-oxides 35 . Under similar reaction conditions, the oxime derived from 2-(methylamino)acetophenone (R = CH 3 ) afforded after 1 h, the corresponding 1,2-dihydroquinazoline 3-oxides . Samandran et al. also synthesised a series of the 1,2-dihydroquinazoline 3-oxides from the reaction of equimolar amounts of amino oximes with the corresponding aldehydes in ethanol at RT for 24 h . 2-Aminoacetophenone oxime analogue 36 was previously reacted with butanedione monooxime 37 in acetic acid under reflux for 24 h to afford ketoximes 38 . The latter were, in turn, cyclized in ethanol–acetic acid mixture under reflux for 24 h to afford the corresponding quinazoline 3-oxides 39 in 60–75% yield . These quinazoline 3-oxides were evaluated for cytotoxic activities against the human leukaemia HL-60 cells under hypoxic and aerobic conditions using tirapazamine as the reference standard. Chen and Yang previously exposed 4-methyl-2-(4-nitrophenyl)-1,2-dihydroquinazoline-3-oxide 40 to visible light in the presence of 0.5 mol % tris(bipyridine)ruthenium(II) chloride (Ru(bpy) 3 Cl 2 ) as a photocatalyst in acetonitrile under aerobic conditions and isolated 4-methyl-2-(4-nitrophenyl)quinazoline 3-oxide 41 in 63% yield . No product was obtained when the photooxidation of 40 was conducted under argon atmosphere prompting the authors to suggest the importance of molecular oxygen as the oxidant for this photoreaction. 
It is envisaged that visible light excited the Ru(bpy) 3 2+ to accept one electron from NH of 40 to yield the cation radical 40′ and the Ru(bpy) 3 + (see ref for fragmentation pattern). Electron transfer from the latter to molecular oxygen yielded the superoxide anion radical and regenerated the ground-state photocatalyst Ru(bpy) 3 2+ . It is envisaged that the cation radical 40′ underwent proton and hydrogen transfers to the superoxide anion radical to furnish the quinazoline 3-oxide 41 extruding hydrogen peroxide as a by-product. Oxidizing agents such as 2,3-dichloro-5,6-dicyanobenzoquinone (DDQ) , active manganese oxide (MnO 2 ) , and hydrogen peroxide (H 2 O 2 )-tungstate were used before to transform the dihydroquinazoline 3-oxides into the corresponding quinazoline 3-oxides. A series of quinazoline 3-oxides 42 (X = H, 6-Cl/Br or 7-Me) substituted at the 2-position with an alkyl or benzyl group and an electron donating or withdrawing group at the 4-position (alkyl, aryl, or heteroaryl) were synthesized in good to excellent yields (54–88%) by oxidation of the corresponding 1,2-dihydroquinazoline 3-oxides 43 using 3 equiv. of activated MnO 2 in dichloromethane at 50 °C . The advantage of the use of MnO 2 as an oxidant is the ease of its removal from the reaction mixture which involves simple filtration. Coșkun et al. have also dehydrogenated the dihydroquinazoline 3-oxides 44 using H 2 O 2 -sodium tungstate oxidant system in THF to afford after 24 h at RT the corresponding quinazoline 3-oxides 45 . However, the one-pot synthesis of these quinazoline 3-oxides from the 2-aminobenzaldoximes (refer to ) proceeded in a relatively short time resulting in improved overall yields . Quinazolines N -oxides can undergo deoxygenation into quinazolines , acetoxylation and ring expansion to benzodiazepines . 2.4.1. Deoxygenation of Quinazoline N -Oxides The N–O bond in pyrimidine N -oxides is cleaved by catalytic reduction, low-valent phosphorus (PCl 3 or POCl 3 ) or titanium (TiCl 3 ) reagents, as well as by the more common metals used for hydrogenolysis. Deoxygenation of 4-methyl-2-phenylquinazoline 3-oxide 46 using Zn in the presence of aqueous NH 4 Cl in THF afforded 4-methyl-2-phenylquinazoline 47 in 71% yield . Deoxygenation of the analogous quinazoline 1-oxides 4a (R = CH 3 ) and 4b (R = -CH 2 CH 3 ), on the other hand, was achieved through catalytic hydrogenation (Raney Ni catalyst in MeOH under hydrogen (H 2 ) stream) to afford 4-substituted quinazoline 48 in 33–43% yield . A mixture of N -oxide 46 and phosphorus oxychloride in chloroform was heated at reflux for 15 min. followed by aqueous work-up and purification through silica gel column chromatography to afford 49 in 18% yield . Improved yield (70%) of this quinazoline derivative was observed when this quinazoline 3-oxide was treated with PCl 5 in dichloromethane at RT for 15 min. . 2.4.2. Alkoxylation of Quinazoline N -Oxides The highly acidic proton of the methyl group at the C-4 position of quinazoline-3-oxide scaffold has been found to promote acetoxylation to ester derivatives. 4-Methyl-7-methoxy-2-phenyl substituted quinazoline 3-oxide 50 , for example, was subjected to acetic anhydride under reflux for 0.5 h to afford the ester derivative 51 in 82% yield . 2.4.3. Alkylation of Quinazoline N -Oxides N -Oxide moiety in aza-heteroarene represents an efficient and removable directing group for ortho C–H bond activation. 
Zhao et al., for example, effected a copper-catalyzed oxidative coupling reaction between the Csp2–H of quinazoline 3-oxide 52 and the Csp2–H of benzaldehyde derivatives in the presence of tert-butyl hydroperoxide (TBHP) in dichloromethane at 40 °C under a nitrogen atmosphere to furnish the quinazolinone derivatives 53 . Both aliphatic and aromatic substituents at the 2-position of the quinazoline 3-oxide scaffold were tolerated, though the yields decreased as the chain length increased from methyl to propyl. α,β-Unsaturated aldehydes, heteroaryl aldehydes, and aliphatic aldehydes were also found to be suitable acyl donors, affording cyclic hydroxamic esters in good to excellent yields. The authors observed the formation of quinazoline aryl ketone derivatives 54 from 52 in the presence of Cu(OAc)2, albeit in low yields, when the reaction was quenched prematurely. The quinazoline aryl ketones 54 were isolated as sole products in the presence of trimethylsilyl azide (TMSN3) and copper carbonate (CuCO3) . Control reactions revealed that compounds 53 are the consequence of initial in situ Baeyer–Villiger oxidation of the quinazoline aryl ketones 54 followed by intramolecular acyl transfer to afford 53 . In a subsequent study, these authors employed this strategy on benzylic Csp3–H bonds with quinazoline 3-oxides 52 in the presence of CuSO4 (3 mol %), TBHP (2 equiv.), tetrabutylammonium iodide (TBAI, 20 mol %), and NaI (70 mol %) in dichloromethane at 70 °C in sealed tubes to afford, after 12 h, the corresponding quinazolinone derivatives 55 . A copper-catalyzed oxidative coupling reaction between the Csp2–H of quinazoline 3-oxide 56 and the Csp2–H of formamides in the presence of copper hydroxide and TBHP in dichloroethane (DCE) also afforded the analogous O-quinazolinone carbamates 57 . The latter are envisaged to be formed through a reaction sequence involving radical addition, Baeyer–Villiger oxidation, and intramolecular acyl transfer . Quinazoline 3-oxides 58 reacted with primary amines in the presence of TBHP as the oxidant in dioxane under reflux for 24–44 h to afford the quinazolin-4(3H)-one derivatives 59 . The mechanism of this reaction was investigated using control reactions, and ESI-MS analysis revealed a complex reaction involving multiple bond dissociation/recombination steps. These mild, metal-free conditions were found to be compatible with a broad range of primary amines, producing a series of quinazolin-4(3H)-ones. Moreover, this methodology also afforded 3-(2-(1H-indol-3-yl)ethyl)quinazolin-4(3H)-one 60 in 70% yield, which is a precursor for the synthesis of the bioactive alkaloids rutaecarpine and (±)-evodiamine . Direct C-4 alkylation of the 2-unsubstituted and 2-aryl substituted quinazoline 3-oxides was previously achieved with open-chain ethers (1,2-dimethoxyethane, diethoxymethane, or diethyl ether) or cyclic ethers (1,3-dioxolane or 1,3-benzodioxole) in the presence of tert-butyl peroxybenzoate (TBPB), affording a series of oxidative cross-coupling products in moderate to good yields . The reactions of 1,4-dioxane with 61a (R = H) and 61b (R = Ar) to afford 62a and 62b, respectively, serve as representative models for the radical oxidative cross-coupling of the sp3 C–H bond in ethers with the sp2 C–H bond in the quinazoline 3-oxide. The mechanism of this radical oxidative cross-coupling reaction is envisaged to involve the initial decomposition of TBPB to generate a tert-butoxyl radical and a benzoate radical. 
The more reactive tert-butoxyl radical then abstracted hydrogen from 1,4-dioxane, and the resultant dioxane radical added to quinazoline 3-oxide 61 to generate a quinazoline 3-oxide radical. Abstraction of a hydrogen atom from the quinazoline 3-oxide radical by the less sterically hindered benzoate radical (versus the tert-butoxyl radical) afforded quinazoline 3-oxide 62 and benzoic acid as a by-product . Copper(II) chloride has been employed to activate the Csp2–H bond (C-4) of the 2-aryl substituted quinazoline 3-oxide 63 and the Csp2–H bond (C-3) of various indoles 64, facilitating the cross-dehydrogenative-coupling reaction between these two chromophores to afford the quinazoline 3-oxide-appended indole hybrids 65 . The N-methylindoles substituted with an alkyl, halide, or alkoxy group at the 5-position afforded the expected quinazoline 3-oxide–indole hybrids in moderate to good yields. However, indoles substituted on nitrogen with an electron-withdrawing group, such as the tosyl group, failed to react. Subsequent dehydrogenation of these molecular hybrids with PCl5 (1.2 equiv.) in toluene at RT afforded the quinazoline–indole hybrids 66 . Quinazoline 3-oxides are valuable intermediates in the synthesis of benzodiazepine analogues and other polycyclic compounds of biological importance , and examples of these reactions are described in the next sections. The analogous 4-(1-benzyl-1H-indol-3-yl)-6,7-dimethoxyquinazoline has previously been found to exhibit moderate activity against the protein tyrosine kinase ErbB-2, with little or no activity against the epidermal growth factor receptor tyrosine kinase (EGFR-TK) . The 4-(indol-3-yl)quinazolines, on the other hand, were found to be highly potent EGFR-TK inhibitors with excellent cytotoxic properties against several cancer cell lines . 
The oxygen atom of quinazoline 3-oxides can also participate in ring-closure reactions to yield polycyclic derivatives. The 3-oxidoquinazoline-2-carbamates 67, for example, were found to undergo reductive ring closure to afford the 3,9-dihydro-2H-[1,2,4]oxadiazolo[3,2-b]quinazolin-2-ones 68 . Wu et al. reported a copper-catalyzed [3 + 2] cycloaddition of quinazoline 3-oxides 69 with alkylidenecyclopropane derivatives to afford the angular polycyclic quinazoline derivatives 70 . Methyl, methoxy, fluoro, and chloro functionalities were all tolerated, leading to the formation of the corresponding N-(2-(5-oxa-6-azaspiro[2.4]hept-6-en-7-yl)phenyl) derivatives 69 in high yield. Yin et al. studied the [3 + 2] cycloaddition reaction between the quinazoline 3-oxides 71 (R2 = H, alkyl, aryl) and various alkene derivatives 72 such as methyl 3-methoxyacrylate (R3 = -CO2Me, R4 = -OMe), ethyl 3-ethoxyacrylate (R3 = -CO2Et, R4 = -OEt), dimethyl maleate (R3, R4 = -CO2Me), acrylonitrile (R3 = -CN, R4 = H), and 5-methylhex-2-enoic acid methyl ester (R3 = -CO2Me, R4 = -CH2CH(CH3)2) to afford a series of isoxazolo[2,3-c]quinazolines 73 in good to excellent yield with total regio- and stereoselectivity . A density functional theory (DFT) study using the B3LYP functional with the 6-31G(d) basis set further predicted the reaction to be under thermodynamic control and to favour exclusive formation of the ortho-exo cycloadduct, in agreement with the experimental findings . Earlier, Heaney et al. reacted 2-styrylquinazoline 3-oxide 71 (R2 = -CH=CHPh) with phenyl vinyl sulfone or N-methylmaleimide in THF under reflux and isolated the corresponding isoxazolo[2,3-c]quinazoline derivatives in very low yields . 
These tricyclic compounds were found to be unstable in solution at RT and to rearrange to other complex products. The use of acetylene derivatives as dipolarophiles with the 4-carbo-substituted quinazoline 3-oxides, on the other hand, resulted in the isolation of benzodiazepine analogues instead of the polycyclic quinazoline derivatives. The 2-aryl-4-methylquinazoline 3-oxides 74, for example, were reacted with dimethyl acetylenedicarboxylate (DMAD) 75 in THF at 70 °C for 4 h to afford the methyl 5-(2-methoxy-2-oxoacetyl)-4-methyl-2-phenyl-substituted 5H-benzo[d][1,3]diazepine-5-carboxylates 76 in 67–74% yield . The formation of these benzodiazepine derivatives is envisaged to proceed via the rearrangement of the incipient tricyclic quinazoline intermediate A, as represented in the Scheme. DMAD, on the other hand, reacted with the isomeric 2-methyl-4-phenylquinazoline 3-oxide 77 in benzene-(m)ethanol followed by purification on basic alumina to afford the phenyl acrylates 78 (13–21%) and the potentially tautomeric benzodiazepines 79 in 5–14% yield, together with smaller amounts of other products . The analogous 4-methyl-2-styrylquinazoline 3-oxides 80a–d were also reacted with dimethyl acetylenedicarboxylate (DMAD) 75a (R2 = -CO2CH3) or methyl propiolate 75b (R2 = H) as dipolarophiles (2 equiv.) in dry THF under reflux for 16 h, followed by purification by column chromatography on either silica gel (SiO2) or neutral alumina (Al2O3), to afford the corresponding benzodiazepine analogues 81a–d . The presence of the 2-styryl group resulted in significantly reduced yields of the corresponding benzodiazepine derivatives compared to the products of the reaction of the analogous 2-phenyl-4-methylquinazoline 3-oxides 74 with DMAD (refer to above). The heterocyclic ring of these compounds was hydrolysed during purification by column chromatography on SiO2 and/or Al2O3, and the extent of hydrolysis depended on the nature of the C-4 and C-5 substituents on the diazepine ring. The presence of the carbon–carbon double bond in the five-membered ring of the tricyclic quinazoline intermediate implicated in the reaction of the 4-carbo-substituted quinazoline 3-oxides 74, 77, or 80 with the acetylene derivative 75a or 75b facilitated the rearrangement and subsequent ring enlargement to afford the benzodiazepine analogues. Nucleophilic attack on C-2 of the 2-chloromethyl quinazoline 3-oxide 82 by methylamine, followed sequentially by ring opening of intermediate A and intramolecular displacement of the chlorine atom by the nitrogen atom of the oxime moiety, afforded chlordiazepoxide 83 . Acid hydrolysis of chlordiazepoxide and in situ hydrolysis of the benzodiazepin-2-one 4-oxide intermediate 84, followed by PCl3-mediated deoxygenation, afforded the 1,4-benzodiazepine diazepam 85 . Benzodiazepines enhance the effect of the neurotransmitter gamma-aminobutyric acid (GABA) at GABA-A receptors, resulting in sedative, hypnotic, anxiolytic, anticonvulsant, and muscle-relaxant properties . These properties make benzodiazepines and their analogues useful drugs in the treatment of anxiety, insomnia, agitation, seizures, muscle spasms, and alcohol withdrawal, and as premedication for medical or dental procedures. Chlordiazepoxide and diazepam, for example, are central nervous system (CNS) agents used for the treatment of muscle spasms, seizures, trauma, and anxiety disorders . Aromatic N-oxides are desirable biologically active compounds with potential for application in the pharmaceutical and agrochemical industries. 
It is imperative for medicinal chemists to continue to develop environmentally friendly and mild methods for the production of quinazoline 3-oxides. This scaffold is capable of undergoing various chemical transformations into biologically relevant polysubstituted quinazolines and their polynuclear derivatives, as well as ring expansion to afford benzodiazepine analogues with CNS activity. The potential of the quinazoline 3-oxide scaffold to undergo transition-metal-catalyzed cross-dehydrogenative coupling, on the other hand, makes these compounds suitable candidates for the design and synthesis of other novel biologically relevant molecular hybrids. Moreover, the presence of a halogen atom on the fused benzo ring of the quinazoline 3-oxide framework would facilitate further chemical transformation via transition-metal-catalyzed cross-coupling reactions to afford polysubstituted derivatives. It is envisaged that this review will help medicinal chemistry researchers to design and synthesize new quinazoline 3-oxides and their derivatives and to investigate their biological properties for the treatment of various diseases. |
I-PASS Mentored Implementation Handoff Curriculum: Champion Training Materials | 43ced0eb-765c-4795-98ae-6ad4ff4d794b | 6354793 | Pediatrics[mh] | After reviewing these resources, learners will be able to: 1. Describe the role of the I-PASS champions in implementing the I-PASS Handoff Program. 2. Discuss the importance of handoff observations with formative feedback in promoting successful implementation and sustainment of the I-PASS Handoff Program at their institution. 3. Demonstrate competent use of the I-PASS handoff assessment tools. 4. Articulate key steps for successful implementation of the I-PASS Handoff Program (including training activities, handoff observations, campaign activities, and revisions to printed handoff documents). 5. Identify ways to adapt the I-PASS program to fit the needs of their local environment.
Handoffs in patient care are high-risk events for errors in communication between providers. The Joint Commission and the Department of Defense cite handoffs as playing a role in roughly two-thirds of sentinel events in hospitals. , Recognizing that handoffs in patient care are a critical skill in which physicians must be adequately trained and supervised, the Accreditation Council for Graduate Medical Education now requires all training programs in the United States to teach handoff skills and to monitor the quality of handoffs. In addition, the Association of American Medical Colleges has also identified the ability to give or receive a patient handover to transition care as a Core Entrustable Professional Activity that all medical students should be able to perform upon entering residency. Despite the recognition of handoffs as high-risk events in patient care and the need to train physicians in handoff skills, standardized, evidence-based curricula were lacking when these requirements were implemented. To address this gap, in 2013, the I-PASS Study Group released the suite of I-PASS Handoff Curriculum materials that were developed for the 11-center I-PASS Study. – This curriculum is an evidence-based, standardized approach to teaching, evaluating, and improving handoffs. During the development of this curriculum for the original I-PASS Study, it quickly became evident that having faculty members who were well trained in handoff communication skills and how to observe these skills in residents would be necessary to roll out the intervention at each site. Despite this designation as a critical link, we noted that many faculty members and other clinical leaders who volunteered to serve as champions had never received handoff training during their careers and, therefore, were ill equipped to tackle this monumental task. It was imperative to the project's success that a specialized curriculum be created for these champions that provided training in effective handoff techniques and communication skills, how to conduct handoff observations, how to provide feedback to residents on handoff quality, and, lastly, how champions could assist in the implementation of the entire curriculum and support the I-PASS campaign. These faculty members, also known as I-PASS champions, would be critical to the success of the intervention as they would be responsible for ensuring effective workplace-based assessment of resident handoff skills. Individuals who have typically served as I-PASS champions include faculty or attending physicians, chief residents, senior residents, nursing leaders, quality improvement leaders, and other key educators at an institution. These individuals generally have experience in the clinical environment with handoff processes and therefore are well positioned to guide more-junior clinicians in their handoff skills and provide feedback. Initially, I-PASS champions are typically trained by the site leaders. This is done early in the implementation process and in advance of training frontline providers, whom the champions will be responsible for observing and providing feedback to. Once an initial group of champions is trained, its members can assist in training other champions, as well as training frontline providers. Champions can also be called upon to assist with or lead other elements of the I-PASS Handoff Program, including data collection from and analysis of the handoff observations, implementation of the I-PASS campaign, and interaction with key leaders and stakeholders. 
The original I-PASS faculty champion materials were published in MedEdPORTAL in 2013. There have been over 100 requests for the materials since that time. The development of the original champion training materials utilized a rigorous approach following Kern's six steps for curriculum development. , As part of the Society of Hospital Medicine (SHM) I-PASS Mentored Implementation Program, the materials underwent review and revision, reflecting the final steps of curriculum development that include evaluation and adaptation based upon feedback received. The updates to the curricular materials for this revised module include the development of training materials for both adult and pediatric providers, as well as the incorporation of more interactive and independent-study curricular elements. In addition to the implementation of the original champion curriculum at the nine original I-PASS Study sites, the newly updated I-PASS champion curriculum has been successfully implemented at 32 adult and pediatric hospitals across North America. The I-PASS Mentored Implementation Handoff Curriculum champion training materials are an all-inclusive curriculum package for those looking to train faculty members or other experienced clinicians who could serve as I-PASS champions at their institution. The curriculum includes the I-PASS champion video module as well as the I-PASS Champion Workshop and complementary materials. The I-PASS Champion Workshop complementary materials include the I-PASS Champion Workshop interactive guide, the handoff observation video, and an I-PASS Champion Workshop evaluation form. This resource can be implemented as an independent curriculum; however, we recommend concurrent implementation of the other complementary I-PASS curricular modules that are available in MedEdPORTAL.
The I-PASS Mentored Implementation Handoff Curriculum champion training materials were typically accessed by individuals leading the implementation and training efforts of a handoff program at a given site. Site leaders traditionally were residency program directors, division or section directors, chief residents, faculty members, hospital quality improvement leaders, or designated institutional officials. They were responsible for identifying individuals who could serve as I-PASS champions at their site. Incorporating a flipped classroom approach to promote efficient training of these adult learners, the training of our champions began with having them view the I-PASS champion video module ( ) independently in advance of the in-person I-PASS Champion Workshop. This was accomplished by showing the video during a meeting time or sending a link to the video (posted on a hospital shared drive or other online format). This 20-minute module provided site champions and faculty members with an overview of the I-PASS Handoff Program and a framework on how to implement all aspects of the program with various groups of providers and how to adapt it to their own institution. Our champions then attended the in-person I-PASS Champion Workshop ( ), which took place in a conference room with video and audio capabilities. The workshop provided site and faculty champions with an overview of the I-PASS handoff techniques, as well as an opportunity to practice evaluating handoffs with the I-PASS observation tools (contained in ) using a simulated handoff scenario in the handoff observation video ( ). I-PASS champions also received guidance on how to adapt the program for their institutional needs, overcome barriers, develop a plan for trainings and observations, and review an example of an organizational chart. This in-person training session took about 90–120 minutes. It could be executed in one longer session or be broken up into two shorter sessions. If electing to break the workshop into two shorter sessions, we suggest covering all the initial material through “The Role of the I-PASS Champion: What We Need From You” in the first session, then resuming with “Handoff Observations: Observation Basics & Assessment Tools” and continuing with the remaining material during the second session. All videos were embedded in the slides; however, if they do not format correctly for future users, they can be embedded in the slides locally. The I-PASS Champion Workshop handout ( ) was printed in advance of the workshop and distributed to champions at the start of the session. The handout contained the necessary information to experience the interactive session, including examples of the I-PASS handoff assessment tools that champions gained competency in using during the session. After viewing the I-PASS champion video module and participating in the in-person I-PASS Champion Workshop, I-PASS champions at all of the 32 SHM I-PASS sites were asked to complete a survey form ( ) evaluating the curriculum. These surveys were distributed as either paper forms or a link to a REDCap survey tool, via either a tiny URL or a QR code at the end of the in-person workshop. The surveys contained various questions about the champions’ confidence in executing various skills related to the curriculum, as well as the importance of the training to their patient care activities. Reponses to these questions were collected on a 5-point Likert scale. 
The survey results were analyzed by collapsing the responses into three categories: strongly agree/agree, neutral, and disagree/strongly disagree. The number of champions trained as a part of the program, as well as the number of handoff observations conducted by these champions, was also collected. Champion training for the SHM I-PASS Mentored Implementation Program took place a few months in advance of the training of any frontline providers, roughly 2 to 3 months into the implementation process. Training champions well in advance of training frontline providers was necessary because champions would be called upon to observe frontline providers prior to full-scale implementation of the program in order to establish baseline data for handoff performance.
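To make the collapsing step above concrete, the short Python sketch below maps each 5-point Likert response to one of the three reporting categories. The response values and the use of pandas are purely illustrative assumptions; the actual analysis was performed on the paper forms and REDCap exports described above, not with this code.

    import pandas as pd

    # Map each 5-point Likert level to one of the three reporting categories.
    likert_to_category = {
        "strongly agree": "agree/strongly agree",
        "agree": "agree/strongly agree",
        "neutral": "neutral",
        "disagree": "disagree/strongly disagree",
        "strongly disagree": "disagree/strongly disagree",
    }
    # Hypothetical responses to a single survey item.
    responses = pd.Series(["strongly agree", "agree", "neutral", "agree", "disagree"])
    collapsed = responses.map(likert_to_category)
    print(collapsed.value_counts(normalize=True))  # proportion in each collapsed category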
Three hundred sixty-six champions participated in the SHM I-PASS Mentored Implementation Program's champion training at the 32 sites across North America. This group included champions with expertise in both internal medicine and pediatrics. These I-PASS champions participated in a total of 3,491 observations of the giver of a handoff ( M = 194/month) and 2,444 observations of the receiver of a handoff ( M = 136/month) following the start of the program in 2015. Three hundred forty-six champions completed the I-PASS Champion Workshop evaluation form at the end of their training (94.5% response rate). Faculty responses from the evaluation form are detailed in the .
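As a quick arithmetic check on the figures reported above (and on the assumption, not stated explicitly in the text, that the monthly means are simply the observation totals divided by the number of months observed), the response rate and the implied length of the observation window can be recomputed in a few lines of Python:

    champions_trained = 366
    evaluations_returned = 346
    print(round(100 * evaluations_returned / champions_trained, 1))  # 94.5 (% response rate)

    giver_total, giver_monthly_mean = 3491, 194
    receiver_total, receiver_monthly_mean = 2444, 136
    # Both ratios come out at roughly 18, consistent with an ~18-month observation window.
    print(giver_total / giver_monthly_mean, receiver_total / receiver_monthly_mean)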
The development of the I-PASS Mentored Implementation Handoff Curriculum reflects a 6-year collaborative effort between medical educators, health services researchers, experts in quality improvement and patient safety, and the SHM to develop an innovative suite of educational materials that has been proven to have a positive impact on the safety, efficiency, and efficacy of shift-to-shift handoffs between providers. The implementation of this curriculum at the nine original study sites was shown to have a significant impact on patient safety and on the educational experience of resident and faculty physicians without adversely impacting their day-to-day workflow. Early data collected from the SHM I-PASS Mentored Implementation Program is demonstrating a similar trend. After receiving the I-PASS Mentored Implementation Handoff Curriculum champion training materials, over 90% of the 346 champions agreed or strongly agreed that the training provided them with knowledge or skills critical to their patient care activities. A large percentage also noted that after receiving the training, they were able to distinguish the difference between high- and poor-quality handoffs, competently use the I-PASS handoff assessment tools, articulate the importance of handoff observations in successful implementation and sustainment of the I-PASS Handoff Program, and articulate key steps for successful implementation of the program. We believe that the success of the I-PASS Handoff Program at all the intervention sites to date can be directly attributed to the development of a robust curriculum for I-PASS champions. These champions were pivotal in assisting with the implementation and sustainment efforts of the program at the nine original study sites and the 32 SHM mentored implementation sites. We found that stimulating culture change and altering physician practice habits during these projects required regular monitoring of handoff practices and provision of feedback on performance, at both an individual and an institutional level. The data the champions obtained during the initial study and subsequent dissemination efforts highlighted the success of the intervention and helped stimulate tests of change through quality improvement efforts. The observations also helped to reinforce good communication and handoff behaviors early in the project. During the implementation of this curriculum at the intervention sites, we encountered three key challenges that we believe need to be addressed at other sites to ensure successful adoption. We present some potential solutions to these challenges below. 1. Lack of insight into the need for handoff training: While most I-PASS champions and frontline providers recognized their individual need for additional training and personal development, some did not recognize this skill gap. In addition, some did not support the rigor of the I-PASS champion curriculum or the structure of the I-PASS handoff process. In order to address this deficiency, we provided published data as to the known skill gap in handoffs amongst health care providers, elicited support from high-level institutional leaders, and capitalized on the support of early adopters of the program. 2. Gaining buy-in from champions: In addition to using published data on the impact of poor handoff communication on patient safety outcomes and gaining support from high-level institutional leaders, we also attempted to incentivize champions to participate in the program. 
The incentives we used included an attestation noting participation that could be added to one's curriculum vitae and provision of continuing medical education and Maintenance of Certification Part 4 credit through the American Board of Pediatrics and American Board of Internal Medicine. 3. Lack of time to train champions: Universally, across all our intervention sites, finding time to train faculty in the champion curriculum was challenging given the clinical, administrative, and research demands busy clinicians faced. In order to address this challenge in this revised version of the curriculum, we embraced more flexible and multimodal training options, including a video module to be viewed prior to an in-person workshop. This flipped classroom approach to training allowed for asynchronous independent learning, followed by a shorter in-person session focusing mainly on application of skills in simulated scenarios. The I-PASS Mentored Implementation Handoff Curriculum champion training materials are a critical element of institutional implementation of the I-PASS Handoff Program and meet the specialized learning needs of project champions. Future work of the I-PASS Study Group will focus on further dissemination of the curricular materials and reflection on how to continue to meet the learning needs of champions in a wide range of specialties and health care institutions.
A. I-PASS Champion Video Module.mp4 B. I-PASS Champion Workshop.pptx C. I-PASS Champion Workshop Handout.docx D. Handoff Observation Video.wmv E. I-PASS Champion Workshop Evaluation Form.pdf All appendices are peer reviewed as integral parts of the Original Publication.
|
Health-related quality of life associated with diabetic retinopathy in patients at a public primary care service in southern Brazil | 2b0a56a3-10ea-447c-80e7-ef368ef16ad3 | 10118975 | Ophthalmology[mh] | Diabetic retinopathy (DR) is one of the leading causes of preventable visual impairment and blindness worldwide, despite existing accurate diagnostic technologies and effective treatments ( , ). It is usually asymptomatic until late stages and can lead to sudden visual loss affecting the individual's functional capabilities (e.g., mobility, independence, self-care, and the ability to perform daily activities such as work and leisure) ( ). In Brazil, DR accounts for approximately 25% of all years lived with disability from diabetes mellitus ( ). The impact of DR on quality of life (also referred to as health-related quality of life [HRQoL]) has been reported in several countries ( , ). The results suggest that DR severity has a significant negative impact on HRQoL among patients with diabetes. However, this finding is not consistent across studies ( ). HRQoL is a complex construct involving an individual's perception of his or her health state (including physical, mental, and social domains) ( ). Sociocultural differences may influence HRQoL perception ( ). Therefore, it is particularly important to provide information on the impact of DR on HRQoL in light of the social context ( ). The EuroQol five dimensions (EQ-5D) ( ) is the most commonly used generic preference-based measure of HRQoL (other measures include the SF-36 and the HUI-3) ( ). It encompasses five health dimensions (mobility, self-care, usual activities, pain/discomfort, and anxiety/depression), all of which contain three severity levels, resulting in 243 possible health states. The health states can be converted into utility values (with a single value representing an individual's preferences for a given health state) based on preferences of the general population (also referred to as tariffs). Utility values range from 1 (equaling full health) to 0 (equaling death). Negative values may also occur, indicating that a person's health state is worse than death ( ). The obtained utility values can be used to calculate the quality-adjusted life-year (QALY) measure by multiplying the values by the amount of time spent in a specific health state ( ). Many national guidelines (such as those from Brazil and the UK) ( , ) recommend using QALYs in economic evaluations to compare benefits from health technologies ( ). To the best of our knowledge, no study has been conducted using Brazilian EQ-5D tariffs to describe utility values according to DR health states. Therefore, this study aimed to establish the utility values for different health states associated with DR in a Brazilian sample to provide input to model-based economic evaluations, and to explore potential differences in HRQoL among DR health states.
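As a purely illustrative worked example of the utility-to-QALY step described above (the numbers below are hypothetical and are not taken from this study): the EQ-5D-3L descriptive system yields 3^5 = 243 possible health-state profiles, each of which the national tariff maps to a single utility value, and QALYs are then the time-weighted sum of those utilities, QALYs = Σ (utility × years spent in that state). For instance, a patient who spends 2 years in a state valued at 0.75 and 3 years in a state valued at 0.63 accrues 2 × 0.75 + 3 × 0.63 = 1.50 + 1.89 = 3.39 QALYs, compared with 5.0 QALYs for the same 5 years spent in full health.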
Study design and population This was a cross-sectional study including a convenience sample of patients with type 2 diabetes mellitus (T2D) who underwent teleophthalmology screening at a public primary care service in Southern Brazil from 2014 to 2016. Patients with T2D who were registered at the service were invited to participate by phone calls or were referred by the service's family physicians for screening. Individuals with T2D who were older than 18 years were included. Patients were excluded if they had type 1 diabetes (T1D; n = 5, 2%), cognition problems (n = 0), blindness due to a disease other than T2D (n = 1, 0.4%), or unreadable retinal photographs due to lens opacity (n = 20, 8.6%). Patients with T1D were not included because of the low prevalence of this disease in primary care. Prior to the measurements, study requirements were explained to the patients by one of the three trained family physicians performing the teleophthalmology screening ( ). Patients who agreed to participate provided written informed consent. A legal guardian signed the written informed consent in case of blindness. A sample size of 126 patients was required to detect a difference of 0.1 in mean utility value between two DR health states with an α of 0.05 and a power of 80%. The study was approved by the Ethics Committee of the Hospital de Clínicas de Porto Alegre . Teleophthalmology screening Retinal photographs were taken by the aforementioned trained family physicians. Images of two fields of each eye were captured using the Canon CR-2 Digital Retinal Camera (Canon U.S.A., Inc., Melville, NY, USA). Retinal photographs were remotely evaluated and classified by two ophthalmologists of the teleophthalmology screening service based on the International Clinical Diabetic Retinopathy and Diabetic Macular Edema Disease Severity Scale ( ). More details about the teleophthalmology screening training and work process have been described by other authors ( ). Diabetic retinopathy health states Four DR health states were defined based on economic evaluation models previously published in the literature ( - ): absent (NoDR), non-sight-threatening (Non-STDR), sight-threatening (STDR), and bilateral blindness (BB). The Non-STDR health state included mild and moderate nonproliferative DR. The STDR health state included severe nonproliferative DR, proliferative DR, and clinically significant macular edema. The categorization as Non-STDR and STDR was based on the worse eye. Patients were asked to report previously diagnosed eye conditions. Patients reporting a complete vision loss in both eyes due to T2D and presenting retinographic findings suggestive of vision loss due to DR were classified as BB. Measure of Health-Related Quality of Life – Utility values The EQ-5D is a standardized, generic preference-based measure of HRQoL developed by the EuroQol Group ( ). The three-level version of the EQ-5D consists of two pages: the EQ-5D descriptive system and the EQ visual analogue scale (EQVAS). The descriptive system comprises five HRQoL dimensions (mobility, self-care, usual activities, pain/discomfort, and anxiety/depression). Each dimension has three severity levels (no problems, some problems, and extreme problems). The EQVAS records the patient's self-rated health on a vertical visual analogue scale ranging from 0 to 100, where the endpoints are labeled "best imaginable health state" and "worst imaginable health state" ( ). 
The visual analogue scale can be used as a quantitative measure of health outcome that reflects the patient’s own judgment ( ). The patients completed the questionnaire before the retinal photographs were taken. The three-level version of the EQ-5D has been validated in Portuguese, and Brazilian tariffs have been published ( ). Description of variables Demographic and clinical variables of interest for descriptive analysis were collected from electronic medical records: age (years), sex, self-reported skin color (white or non-white), education level (no/primary education, secondary education, higher education), diabetes duration in years, diabetes treatment, glycated hemoglobin (HbA1c), diagnosis of hypertension, creatinine, albuminuria, dialysis, low-density lipoprotein cholesterol (LDL), triglycerides (TG), high-density lipoprotein cholesterol (HDL), presence of foot ulcers or lower-extremity amputation, previous coronary heart disease, stroke, ophthalmic diseases, and other self-reported comorbidities. We collected the most recent laboratory results available in the patients’ electronic medical record within 12 months prior to the screening. Diabetes control was defined as an HbA1c level ≤ 7.0% ( ). Hypertension was defined as current antihypertensive therapy and/or hypertension diagnosis reported in the medical record. Levels of systolic and diastolic blood pressure below 140 mmHg and 90 mmHg, respectively, were classified as controlled hypertension. Chronic kidney disease was defined as any abnormal albuminuria in a spot urine sample (≥ 17 mg/L or 20–200 mg/g Cr) or a glomerular filtration rate < 90 mL/min/1.73 m 2 ( ). Dyslipidemia was defined as values of LDL cholesterol ≥ 160 mg/dL, or TG ≥ 150 mg/dL, or HDL < 40 mg/dL (men) and < 50 mg/dL (women) ( ). Dialysis, lower-extremity amputation, foot ulcers, coronary heart disease, stroke, and ophthalmic disease were inquired through direct questions. Ophthalmic diseases included refractive errors, cataract, glaucoma, ocular toxoplasmosis, and other self-reported ocular conditions. Age-related macular degeneration was assessed by two of the aforementioned ophthalmologists through digital retinal photographs and by patient self-report. Statistics analyses Missing data related to demographic and clinical variables (214 [2.6%] out of 8026 values) were imputed by means of regression models. Descriptive statistics were calculated using pooled data from the 10 imputed data sets. Means ± standard deviations (SD) were used to describe normally distributed variables, and medians and interquartile range (IQR) were used for nonparametric variables. The normality of variables was evaluated by histogram graphs and the Kolmogorov-Smirnov test. The utility values for different health states associated with diabetic retinopathy were assessed with adjustment for potential confounders using analysis of covariance (ANCOVA). The variables included in the adjusted analysis were selected from the two following sources: a) from a theoretical model based on the current literature, those variables associated with HRQoL, such as age, sex, other comorbidities, ophthalmic diseases, and macrovascular and microvascular complications ( , ) and b) from a previous univariate analysis, those variables found to be associated with utility values ( p ≤ 0.05). Diabetes duration, HbA1c, diabetes control, and type of treatment were not included in the adjusted analysis because they are usually not associated with HRQoL, despite their strong association with DR ( , , ). 
Additional adjusted analysis was performed after excluding the cases of BB because this group was very small (two cases). We opted to perform ANCOVA because there was homogeneity of variances in utility values at each level of DR (Levene’s test, p = 0.27) and the residuals followed an approximately normal distribution. For the adjusted analysis, we grouped chronic kidney disease, foot ulcers, and lower-extremity amputation into a single variable named “microvascular complications”. Coronary heart disease and stroke were grouped into a variable named “macrovascular complications”. Cataract, glaucoma, ocular toxoplasmosis, age-related macular degeneration, and other self-reported ocular diseases were grouped into a variable named “ophthalmic diseases”. A variable named “other comorbidities” included other self-reported diseases not included in the three previous variables, such as cancer and rheumatologic and dermatologic disorders. Additional interaction analysis was undertaken considering all possible interactions between variables included in the adjusted analysis. IBM SPSS Statistics version 24.0 was used to perform all analyses.
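The sample-size statement in the study-design subsection (a detectable difference of 0.1 in mean utility, α = 0.05, power of 80%) can be approximately reproduced with a standard two-sample calculation. The standard deviation assumed by the authors is not reported, so the SD of 0.20 used in the Python sketch below is an assumption (chosen because it lies within the range of SDs observed in this sample); with it, the calculation returns roughly 64 patients per group, about 128 in total, close to the reported 126. The snippet uses statsmodels purely for illustration and is not the software used in the study.

    from statsmodels.stats.power import TTestIndPower

    # Difference of 0.1 in mean utility, assumed SD of 0.20 (not stated in the text).
    d = 0.1 / 0.20  # standardized effect size (Cohen's d)
    n_per_group = TTestIndPower().solve_power(effect_size=d, alpha=0.05, power=0.80)
    print(round(n_per_group))  # ~64 per group, i.e. ~128 in total (reported total: 126)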
We included 206 out of the 232 patients who underwent the teleophthalmology screening. The mean age of the patients included was 63.5 ± 10.6 years, 60.7% (n = 125) were female, 85% (n = 175) were of white ethnicity, and 50.5% (n = 104) had secondary education. The patients included in the study had a statistically significantly higher mean utility value compared with those who were excluded due to unreadable retinal photographs (0.765 ± 0.19 vs. 0.636 ± 0.18, respectively, p = 0.004). However, there were no significant differences between included and excluded patients regarding HbA1c (7.5% vs. 7.0%, respectively, p = 0.13), diabetes control (68.4% vs. 71.1%, respectively, p = 0.49), and diabetes duration (8.7 vs. 8.2 years, respectively, p = 0.71). The overall prevalence of DR was 23.8% (n = 49). In all, 15.5% (n = 32) of the patients had Non-STDR, 7.3% (n = 15) had STDR, and 1% (n = 2) had BB ( ). The percentage of patients reporting full health was 25.7% (n = 53). The mean utility was 0.773 ± 0.17 in patients with NoDR and 0.739 ± 0.24 in those with DR. Patients with DR and no BB presented a mean utility of 0.755 ± 0.23, whereas those with BB presented a mean utility of 0.356 ± 0.21. Appendices 1 and 2 provide a more detailed description of the sample regarding the five EQ-5D dimensions of quality of life according to DR health states. Utility values for the different health states related to DR were estimated with and without adjustment for potential confounders. The mean utility value of the various DR health states decreased after adjustment. The adjusted mean utility was 0.748 (95% CI, 0.698–0.798) for the NoDR health state, 0.752 (95% CI, 0.679–0.825) for Non-STDR, 0.628 (95% CI, 0.521–0.736) for STDR, and 0.355 (95% CI, 0.105–0.606) for BB. The adjusted analysis performed after excluding the two cases of BB showed a statistically significant utility decrement between patients at the NoDR and STDR health states (0.748 vs. 0.628, respectively, p = 0.04). No significant differences were found between the NoDR and Non-STDR health states (0.748 vs. 0.752, respectively, p = 1.0) or between the Non-STDR and STDR health states (0.752 vs. 0.628, respectively, p = 0.07). The interaction analysis showed statistically significant interactions between DR health states and other comorbidities (F3,66 = 3.679, p = 0.01); between sex, skin color, and DR health states (F1,66 = 6.020, p = 0.01); between other comorbidities, macrovascular complications, and DR health states (F1,66 = 8.596, p < 0.001); and between skin color and microvascular complications (F1,66 = 3.974, p = 0.05). The interaction between DR health states and the variables included in the adjusted analysis is presented in .
This study established the utility values for different health states associated with DR in a sample of patients with T2D undergoing teleophthalmology screening at a public primary care service in Southern Brazil. The results suggest that a later DR health state is associated with a significant decrement in HRQoL compared to the absence of retinopathy in patients with T2D. Additional interaction analysis suggests that the utility values for different health states associated with DR may depend on a combination of DR with other factors such as sex, skin color, other comorbidities, and macrovascular complications. Additional research is needed to further establish this association. Research exploring the utility values of different health states associated with DR has provided mixed results ( - ). Heintz and cols. ( ) found no difference in mean utility values across various levels of DR severity in a sample of Swedish patients with T1D and T2D. Fenwick and cols. also found no differences in a sample of Australian patients with T1D and T2D ( ). Our study differs from both these studies because it was based on a different population (comprising only individuals with T2D) and included different variables in the adjusted analysis (i.e., did not include variables strongly associated with DR, such as diabetes duration and HbA1c, which are usually not directly related to HRQoL). Similar to our results, Polack and cols. found that late DR health states were associated with a lower mean utility value compared with the absence of DR in a sample of patients with T2D in India ( ). Lloyd and cols. found significant utility value decrements associated with lower visual acuity in a sample of T1D and T2D patients in the UK ( ). The present study also found no HRQoL differences in early DR health states (i.e., between patients without DR and Non-STDR), which is in agreement with other studies suggesting that early DR health states are unlikely to be strongly correlated with any of the dimensions of HRQoL ( , , ). This study has a number of limitations that need to be discussed. First, the EQ-5D may not be sensitive enough to detect small differences in HRQoL during early DR health states ( ). The validity of EQ-5D compared to other generic preference-based measures of HRQoL (e.g., HUI-3) regarding DR progression is controversial ( , ). Researchers have proposed adding bolt-ons to expand EQ-5D descriptive systems considering visual symptoms however, this is still under investigation ( ). Second, the convenience sample only allowed us to assess patients registered at a primary care service, thus potentially reducing the generalizability of the findings. Therefore, we would advise researchers to only use these numbers as preliminary input to model-based economic evaluations. Nonetheless, it is noteworthy that the mean overall utility value reported in our study (0.76 ± 0.19) was similar to values found in developed countries, such as the UK (0.77 ± 0.27) ( ) and the Netherlands (0.74 ± 0.27) ( ). Third, this study did not directly assess visual acuity, which is known to be associated with lower HRQoL in late DR health sates ( , ). Consequently, we had to rely on DR diagnosis/classification by image without adjustment for the visual acuity potential confounder. Nevertheless, to be able to populate model-based economic evaluations, the utility values should be classified according to DR health states instead of visual acuity ( ). 
Fourth, patients with unreadable photographs were excluded from the study, which may have biased the results due to selective patient exclusion. However, the lower HRQoL presented by the excluded patients compared with those included in the study may be related to the lens opacity instead of DR, since there was no difference regarding diabetes control and duration between them. Bearing in mind that HRQoL could be different across country populations and that one of the main outcomes of economic evaluations (i.e., QALY gained) relies on utility values, this study was the first attempt to describe HRQoL associated with DR health states in a Brazilian primary care setting based on general population preferences. These results may be useful as preliminary input to model-based economic evaluations. Further research is needed to investigate the impact of DR progression on HRQoL in a representative sample of the Brazilian population. In conclusion, this study established the utility values for different health states associated with DR in a Southern Brazilian sample of patients with T2D undergoing teleophthalmology screening at a public primary care service. The results suggest that a late DR health state is associated with decrements in HRQoL. The findings may be useful as preliminary input to model-based economic evaluations.
|
Method overtness, forensic autopsy, and the evidentiary suicide note: A multilevel National Violent Death Reporting System analysis | 55c2b84e-eb6e-4907-9dd2-fb10415eaa6f | 5963755 | Pathology[mh] | Emerging as the leading cause of injury mortality in the United States (US) by 2009 , suicide is not a default manner-of-death determination for medical examiners and coroners . A socially condemned and stigmatized phenomenon, suicides are undercounted , with false negativity posing a far greater threat than false positivity to valid certification . National and international research indicates that undercounting is nonrandom across suicide methods . The three leading methods of suicide in the US are firearm, hanging, and poisoning. Collectively, these methods accounted for 92% of total suicides in 2015 . Suicides by poisoning are likely far more challenging for medical examiners and coroners to ascertain than those by shooting and hanging . Gross discrepancies among these major methods in accounting for suicide would inhibit, or even preclude, accurate risk group delineation and risk factor identification, and hence impede the design, targeting, and implementation of effective clinical and population interventions. Availability of a suicide note or an equivalent—whether written, typed, digital, or audio —can serve as a pivotal piece of external forensic and psychological evidence for determining suicide as the manner of death . Some countries even require a suicide note to record a death as suicide , as was true of some US coroners in the 1970s . Thus, lack of an evidentiary note may induce suicide misclassification in the US . Echoing its pivotal nature, a US study found that a suicide manner of death was associated with a far higher prevalence of notes than was undetermined intent, 32% versus 1.5% . Two English studies also revealed large prevalence gaps . Undetermined is the manner of death category most susceptible to obscuring suicides , in relative terms, with poisoning its predominant underlying cause-of-death sub-category in the US . Two earlier American studies found an excess of note-leaving among suicides by poisoning relative to other methods , whereas a later one found an excess among hanging suicides . A subsequent individual-level, multivariable analysis of National Violent Death Reporting System (NVDRS) data, for the period 2003–2006, showed excess note-leaving among suicides by poisoning, firearm, and hanging versus all other methods except jumping from a height and drowning, two other less forensically overt methods . That study also showed note leaving was nonrandom across demographic characteristics such as age, sex, race/ethnicity, marital status, and urban residence. Finally, an Austrian study found no difference in the prevalence of note-leaving across methods of suicide or age, sex, family status, psychiatric care or motive . Unlike the US, Austria is among rare countries whose suicide certification appears very accurate . In this multilevel, multivariable study of NVDRS data for 2011–2013, we posited that a higher proportion of suicide notes in manner of death determinations involving fatal drug introxication, as compared to those involving a firearm or hanging, reflected their use as key evidence in the differential process of establishing suicide as the manner of death. This could signify stricter (i.e., more conservative) evidentiary standards when making these determinations, leading in turn to a greater potential for undercounting suicides. 
We addressed two questions in our examination of the differential determination of suicide, with a view to informing and improving surveillance, etiologic understanding, and prevention of suicide and related injury mortality. Was an evidentiary note more common among suicide cases involving drug intoxication and other poisoning, less self-evident causes and manner of death, as compared to the violent methods of firearm and hanging, where apparent motivation is more overt? Did the performance of a forensic autopsy serve to mediate the association between the reporting of a note and the overtness of the suicide method? Although a forensic autopsy is not a personal window into decedent intent , together with toxicology it generates evidence that helps medical examiners and coroners identify injury mechanisms (i.e., causes of death) and decedent intent in suspicious or uncertain cases . Salient to our study, forensic autopsy and toxicological testing rates vary greatly across states . Since suicides are local events, and often socially stigmatized , we adjusted our analyses for both county-level and individual-level factors. They comprised characteristics of the medicolegal death investigations and the state-county investigation systems, decedent and areal demographics, and documented mental health antecedents and other death circumstances.
Individual-level data source and variables
The source for our individual-level variables was the Restricted Access Database from the NVDRS, which is administered by the Centers for Disease Control and Prevention (CDC). This state, territory, and incident-based surveillance system employs public health informatics for making data linkages, primarily among death certificates, law enforcement records, and medical examiner and coroner records. Also variably incorporating such optional supplements as laboratory reports and hospital records, the NVDRS provides de-identified information about suicide and other violent deaths, including their geographic location, circumstances, and personal sociodemographics, in addition to investigation specifics . Disaggregatable to county of death, the data analyzed in this study represented the 17 states that participated in the NVDRS throughout the 2011–2013 observation period. They were Alaska, Colorado, Georgia, Kentucky, Maryland, Massachusetts, New Jersey, New Mexico, North Carolina, Ohio, Oklahoma, Oregon, Rhode Island, South Carolina, Utah, Virginia, and Wisconsin. Enhancing study generalizability, these states mirrored the nation in their age and sex composition, manner-of-death distribution, and crude and age-adjusted all-cause, suicide, and undetermined intent death rates . They overrepresented non-Hispanic Blacks and Whites and underrepresented Hispanics. Our study population comprised registered suicides (ICD-10: U03, X60-X84, Y87.0), whose method and state and county of death were specified, and was further confined to decedents aged 15 years and older, since fewer than 1% of known suicides were younger. The number of decedents in our multivariable analyses totaled 32,151. As further background, our first table provides comparative data on suicide methods for the 17 NVDRS states and the US. These comparisons were based on separate counts for the selected demographic variables, not study population and corresponding national population counts. Similarity between the states and the nation would fortify the generalizability of the results. The data source was CDC WISQARS™ (Web-based Injury Statistics Query and Reporting System) . The outcome variable in this study was suicide note (yes versus no or unknown). Our predictor distinguished suicide methods as follows: drug intoxication, other poisoning, jumping and drowning (combined owing to small sample size and similar corroborative challenges), hanging/suffocation, firearm (referent), and all other methods specified in the NVDRS suicide cases. Additional individual-level covariates were prior suicide attempt; primary mental diagnosis; current mental health treatment; blood alcohol concentration; number of other specified drug positives; physical health problem; number of intimate partner or legal or job or financial or school problems; crisis in past two weeks; emergency medical services at scene; region of death; age; sex; race/ethnicity; marital status; education; and military veteran status. The mediator was autopsy status (yes versus no or unknown).
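Purely as an illustration of how the outcome and the method predictor described above might be derived from case-level records, a hedged pandas sketch follows. The file name and the raw field values (suicide_note, weapon_type and its codes) are hypothetical stand-ins, not actual NVDRS variable names.

```python
# Hypothetical illustration of the outcome and predictor coding described above.
import pandas as pd

cases = pd.read_csv("nvdrs_suicides.csv")     # one row per decedent (placeholder file)

# Outcome: evidentiary note documented (yes) versus no/unknown.
cases["note"] = (cases["suicide_note"] == "yes").astype(int)

# Predictor: collapse recorded causes of death into the study's method categories,
# keeping firearm as the referent level.
method_map = {
    "poisoning_drug": "drug_intoxication",
    "poisoning_other": "other_poisoning",
    "fall": "jumping_drowning",
    "drowning": "jumping_drowning",
    "hanging_suffocation": "hanging_suffocation",
    "firearm": "firearm",
}
cases["method"] = pd.Categorical(
    cases["weapon_type"].map(method_map).fillna("all_other"),
    categories=["firearm", "drug_intoxication", "other_poisoning",
                "jumping_drowning", "hanging_suffocation", "all_other"],
)
print(cases.groupby("method", observed=False)["note"].mean())   # crude note prevalence
```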
County-level data sources and covariates
Linked to the individual-level data in this study, our county-level death investigation system covariates were mode of selection of the chief medical examiner or coroner for a given system (elected versus appointed) and accreditation status and type (accredited coroner, accredited medical examiner, unaccredited coroner, unaccredited medical examiner). Selection mode was identified through a CDC website , and accreditation status and system type through the respective websites of the relevant accrediting agencies, the National Association of Medical Examiners (NAME) and the International Association of Coroners and Medical Examiners (IACME) . We resolved outstanding questions by email or telephone communication with state, district, or county offices. County-level demographic covariates were urbanicity (5 categories representing the 12 ordinal categories of the 2013 Urban Influence Codes: large metropolitan/small metropolitan/metropolitan adjacent/micropolitan or adjacent/rural) and percent population below poverty as the measure of the local poverty burden. These two covariates served as proxies for external forces that potentially could inhibit or support medicolegal death investigations . Their source was the County Area Health Resource File for 2014–2015 .
Hypotheses and statistical approach
We used a generalized linear mixed model (GLMM) to test the following two hypotheses: (1) an evidentiary suicide note is more likely to accompany suicides by drug intoxication and by other poisoning than suicides by firearm and hanging/suffocation; and (2) performance of a forensic autopsy attenuates the observed association between overtness of method and the reported presence of a note. In testing the first hypothesis, we progressively applied four models, beginning with a univariable analysis, confined to suicide method, followed by multivariable analyses that cumulatively incorporated medicolegal system and investigation characteristics, mental health antecedents and precipitating circumstances, and decedent and areal demographics. Providing for a fifth multivariable model, we then tested our second hypothesis. The GLMM is a two-level model that is logistic at the individual level and linear at the county level. We also included a state-level random effect to incorporate the data structure of counties nested in a state and individuals nested in a county. The statistical software was SAS (copyright © 2002–2010, SAS Institute Inc., Cary, NC).
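One conventional way to write the model just described, with county and state random intercepts, is shown below. The notation is introduced here for illustration and is not the original analysis's own specification.

```latex
\operatorname{logit}\Pr(\mathrm{note}_{ijk}=1)
  = \beta_0 + \boldsymbol{\beta}^{\top}\mathbf{x}_{ijk} + u_{jk} + v_{k},
\qquad
u_{jk}\sim\mathcal{N}\!\left(0,\sigma^{2}_{\mathrm{county}}\right),\quad
v_{k}\sim\mathcal{N}\!\left(0,\sigma^{2}_{\mathrm{state}}\right),
```

where i indexes decedents, j counties, and k states, and x_ijk collects the suicide-method indicators (firearm as referent) plus whichever covariate blocks enter Models 2 through 4, with autopsy status added in the fifth model.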
describes the frequency of suicide by method across age, sex, and race/ethnicity for the NVDRS states and nationally during the observation period. Firearm, hanging, and drug intoxication were the leading methods of suicide. Generally, the states and nation showed very similar percentage distributions across age and sex, except at ages 14 years and under. The small numbers of the latter induced data instability; this age group was eliminated from our study population. Bivariable frequency data on suicide notes for the study population are presented in . An authenticated suicide note was documented in 31% of the suicide cases. Consistent with our principal hypothesis, an excess prevalence of notes manifested for suicides by drug intoxication and by other poisoning, at 42% and 48%, versus 29% and 31% for firearm and hanging/suffocation suicides, respectively. Pertinent to statistical adjustments in the multivariable analyses, note prevalence reflected some marked variability by selection mode of the chief medical examiner/coroner, blood alcohol concentration, region of death, and within the sociodemographic characteristics of sex, race/ethnicity, marital status, and education. Partially affirming the principal hypothesis, the univariable analysis showed drug intoxication suicides and suicides by other poisoning were respectively 71% and 122% more likely than their referent, firearm suicides, to be associated with a note . Neither hanging suicides, as consistent with our hypothesis, nor combined jumping and drowning suicides deviated from the referent. Findings remained robust as grouped death investigation variables (Model 2), precipitating circumstances (Model 3), and decedent and county-level demographics (Model 4) were cumulatively incorporated into the multivariable analyses. For drug intoxication suicides, percent increase in odds of a note relative to the firearm referent rose from 71% in the univariable analysis to 80% and 88%, respectively, in the multivariable analysis with application of Models 2 and 3, and diminution to 66% following addition of the demographics in Model 4. Corresponding changes were more modest for the other poisoning suicide category. However, under Models 1 and 4, percent increase in odds of a note relative to firearms was larger for other poisoning suicides than for drug intoxication suicides at 122% and 113%, respectively. The stratified analysis supported our second hypothesis as performance of a forensic autopsy attenuated the association between suicide method and suicide note outcome . Percent increase in odds of an evidentiary note for drug intoxication suicides, relative to firearm suicides, was 86% with no autopsy or unknown autopsy status versus 62% with autopsy. Corresponding figures for other poisoning suicides were 125% and 101%. With no autopsy or unknown autopsy status, hanging/suffocation suicides were less likely than firearm referents to be accompanied by a suicide note. Expanding Model 4 to include autopsy status showed increased percent odds of a note for drug intoxication suicides and other poisoning suicides of 70% and 112%, respectively, and a diminished effect for the residual group, all other suicide methods, compared to the firearm referent. Further inspection of the full model revealed that cases with no autopsy or with unknown autopsy status had 26% higher odds than autopsied cases of having a suicide note on record.
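The "percent increase in odds" figures quoted throughout these results are re-expressions of adjusted odds ratios. As a worked example of the conversion (the arithmetic is illustrative; the odds ratios shown are simply those implied by the quoted percentages):

```latex
\%\Delta\,\mathrm{odds} = (\mathrm{OR} - 1)\times 100\%,
\qquad \mathrm{OR}=1.71 \Rightarrow 71\%,
\qquad \mathrm{OR}=2.22 \Rightarrow 122\%.
```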
Our findings caution users against uncritically taking suicide and other official data at face value. Under-ascertainment means suicide undercounting. Indicative of differential undercounting by method, increased odds of a suicide note manifested both for cases of suicides by drug intoxication and by other poisoning relative to their referent, firearm cases, as consonant with the first hypothesis concerning the relative violence and forensic overtness of methods. Hanging suicides did not deviate from the referent, nor did the combined category of jumping and drowning suicides. The observed note excess for both drug-intoxication and other poisoning suicides persisted through application of each of the multivariable models, including the full multilevel, multivariable model that incorporated autopsy status. The study implication of growing suicide misclassification and suicide undercounting among drug intoxication deaths assumes added importance as an impediment to prevention in the face of the burgeoning opioid mortality epidemic and severely under-resourced and overburdened emergency healthcare and death investigation systems . Although plausibly a gross underestimate, at 1.62 per 100,000 population, the drug intoxication suicide rate was 41% higher in 2015 than in 2000 . However, suicides involving such poisons as gases, vapors, metals, pesticides, and household cleaners have remained relatively less common. At 0.50 per 100,000, the rate of non-drug poisoning suicides in 2015 was 12% lower than the rate in 2000. Moreover, determination of intent in the non-drug poisoning cases often rests on the unmistakable actions required to use these suicide methods (e.g., carbon monoxide poisoning where the decedent was found in a car with a hose from the tailpipe to the passenger compartment, windows sealed with duct tape, ignition key in the on position, and the gas tank empty), which diminishes the potential ambiguity they pose for medical examiners and coroners. On the other hand, the literature provides no evidence that drug intoxication suicides are more planned and less impulsive than suicides by more overtly violent methods. As hypothesized, performance of a forensic autopsy attenuated the observed associations between overtness of suicide method and odds of an evidentiary suicide note. Although an autopsy can help investigators identify the mechanism or cause of an injury death, it does not invariably enlighten decedent intentionality . Indeed, performance of an autopsy diminished but did not eliminate the note-effect we found when comparing mechanisms of deaths in the manner of death determination. Presence of a suicide note, similar to the effect of an overt suicide method, may even reduce autopsy occurrence . During 2007, 97% of US homicides were autopsied compared with 60% of suicides, 81% of undetermined intent deaths, and 79% of unintentional (accidental) poisoning deaths, pointing to a less rigorous approach to suicide versus homicide determination and accounting . The US National Association of Medical Examiners (NAME) Forensic Autopsy Performance Standards require an autopsy when "the death is by apparent intoxication by alcohol, drugs, or poison, unless a significant interval has passed, and the medical findings and absence of trauma are well documented" .
NAME does not require autopsy in other suicidal deaths where the cause is externally manifest (e.g., gunshot wound or hanging) and instead leaves autopsy performance to the discretion of the pathologist, if it is “necessary to determine cause or manner of death, or document injuries/disease, or collect evidence.” While this standard is expected for medical examiners, who work in some states, it does not apply to elected coroners–the norm for many states. Other jurisdictions have both coroners and medical examiners, varying by counties . By comparison with the US, Finnish medicolegal authorities conducted forensic autopsies in 99% of suicides, 98% of homicides, 98% of unintentional poisoning deaths, and 97% of undetermined deaths in the period 2000–2003 . Researchers viewed a lower autopsy rate and greater use of ill-defined and unknown cause-of-death codes as especially problematic for the quality of Danish suicide and other manner-of-death statistics relative to those of Finland . During the period 1998–2007, Finland and Denmark had annualized suicide rates of 20 and 12 per 100,000 population, respectively . Corresponding combined clinical and forensic autopsy rates for total deaths were 31% and 8%. This gap—as a plausible indicator of different standards of thoroughness in medicolegal death investigations —may mean there is an artifactual component in the differential in suicide rates between these two Scandinavian countries. However, substantive factors contributing to a higher rate in Finland might include variable alcohol consumption and firearm ownership between Finland and Denmark. More generally implicating an artifactual component in suicide rates and rate variation, an ecological or correlational study of 35 Eurasian countries reported spatial and temporal associations between the respective magnitudes of the combined clinical and forensic autopsy rate and the suicide rate . Spatially or cross-sectionally, a 1% difference in autopsy rates was associated with a suicide rate difference of 0.49 per 100,000 population, and temporally or longitudinally a 1% decrease in the autopsy rate with a suicide rate decrease of 0.42 per 100,000. Our research, examining registered or known suicides, complements another multilevel (individual/county), multivariable study which used NVDRS data from the same observation period to predict differential odds that suicides pooled with deaths of undetermined intent, included as possible suicides, would be classified by medical examiners and coroners as suicide if there was documentation of a suicide note and mental health antecedents . One hypothesis in the complementary study addressed the association between an evidentiary suicide note and suicide classification. Underscoring a pivotal role an authenticated note can play in separating suicide from undetermined cases, presence was associated with 34-fold increased odds of a suicide classification. In addition, combined firearm and hanging/suffocation deaths showed 42-fold increased odds of a suicide classification relative to drug intoxication deaths or, alternatively expressed, 98% lower odds of an undetermined classification. Drug intoxication cases with an evidentiary note were 45 times more likely to be classified as suicide compared to corresponding cases with no note or unknown note status, and eight times more likely in firearm and hanging cases. 
A relative strength of the current multilevel study, since it focused on suicides rather than suicides and undetermined deaths, was greater granularity and specificity of suicide methods or injury causes/mechanisms than in the comparative NVDRS study. Based on medicolegal and police investigations, which often involve family and friends of the decedent, NVDRS data are vulnerable to reporter bias, especially among persons who are ashamed or embarrassed by what transpired. This deficiency also may reflect a lack of willingness of investigators to collect data about decedent background and circumstances when suicide is readily apparent as the manner of death, or when, as elected local officials, coroners are reluctant to probe potentially sensitive personal issues. Our study and the complementary NVDRS investigation jointly reveal a need for qualitative as well as quantitative research on whether family and friends of the decedents variably destroy or otherwise conceal suicide notes, and withhold other potential corroborative evidence from authorities. The field now would benefit from mixed methods investigations that integrate psychological autopsies, focus groups, surveys, sociocultural autopsies, content analysis, and thematic analysis to examine values and attitudes towards suicide that may continue to influence its reporting. Although high-quality data regarding suicides are essential for planning, implementing, and evaluating suicide prevention programs, few resources have been devoted to improving fundamental data quality. Without such rigor, it will be difficult to accept the validity of future efforts to reduce suicide rates. The enriched Restricted Access Database of the NVDRS provides the only population-based data in the US appropriate for evaluating our research questions about the relationship between suicide methods, a forensic autopsy, and an evidentiary suicide note. Besides reporter bias, a limitation of this study was restriction of the geographic domain to the 17 states that contributed data to the NVDRS throughout the observation period, 2011–2013. Nevertheless, high demographic concordance between these states and the nation tempers our concern about reduced generalizability of study findings, which additionally is a by-product of system protocols that emphasize uniform definitions of manner of death and consistent data collection, entry, review, and coding . Moreover, the similarity in the distribution of suicides by method across age, sex, and race/ethnicity, between the NVDRS states and the nation, enhanced our confidence in study generalizability. Since our observation period, the NVDRS has expanded to 40 states, the District of Columbia, and Puerto Rico, with a goal to cover all 50 states and the territories . Further study limitations included the indirect nature of our assessment of differential suicide data quality by method, confinement of our study population to suicides whose death circumstances were captured by the NVDRS and whose methods were specified, and our inability to factor in medicolegal use of a computerized tomography scan in lieu of a forensic autopsy. Other limitations and strengths of the NVDRS have been reported .
Suicide requires substantial affirmative evidence to establish manner of death, and affirmation of drug intoxication suicides appears to demand an especially high burden of proof. Findings and their implications argue for more stringent investigative standards, better training, and more resources to support accurate and comprehensive case ascertainment, as the foundation for developing evidence-based suicide prevention initiatives.
|
Metabolome-driven microbiome assembly determining the health of ginger crop ( | 1556c3d5-f5f5-40be-a33b-116f9a60bb7e | 11380783 | Microbiology[mh] | Plant-associated microorganisms can be found in various plant niches and collectively comprise the plant microbiome . Plant microbiomes contain beneficial and pathogenic microbes . Advances in high-throughput sequencing techniques have deepened our knowledge of the relationship between microbiomes and hosts . Plant microbiome assemblages separated into above- and belowground constituent parts have been extensively studied, primarily in model species, including the soil microbiome , rhizosphere , root , and phyllosphere . Microbial communities associated with several plant niches have also been analyzed . Fungus-induced changes are correlated with changes in the wheat leaf microbiome . However, understanding the variation in the microbiome is imperative for determining how microbiome assembly affects overall plant holobiome health. Conversely, plant secondary metabolites (PSMs) perform many functions, including defense against pathogens . PSMs capable of broadly changing plant microbiomes have been described . Phytohormones such as jasmonic acid (JA), salicylic acid (SA), ethylene (ET), and abscisic acid (ABA), among the most studied pathogenesis mediators, have also been shown to have an impact on the microbiome of plants . The plant microbiome and metabolome are closely correlated, which indicates that endophytes can promote the accumulation of secondary metabolites that are relevant to active medicinal properties . The rhizosphere microbiome was shown to drive the systemically induced root exudation of metabolites . Less attention has been given to the effects of the metabolome–microbiome relationship on plant health, although the interactive effect of host plant defense and root-associated microbiota is evident after Fusarium oxysporum infection in Arabidopsis thaliana . Although little research has been conducted on ginger ( Zingiber officinale L. Roscoe) compared to other agricultural plants , ginger is a perennial monocotyledonous herb with underground rhizomes and a long history of use as a fresh vegetable, spice, and herbal medicine. However, this crop is vulnerable to various plant pathogens , and rhizome rot has been a significant limiting factor for ginger’s yield and marketing potential in China. Rhizome rot is a highly destructive disease that has been found to reduce ginger production by 50–90% . The disease causes significant losses, especially in warm and humid conditions, with severe outbreaks observed in recent years. In 2020, rhizome rot led to an average yield loss of 20 to 25% in the Tangshan region, posing a significant threat to local ginger farming . This disease has increasingly become one of the most devastating issues for ginger cultivation in Shandong Province, a key ginger production area in China . Further research on the disease’s epidemiology and potential management options is necessary. Ginger rhizome rot can be attributed to multiple causal agents, including Fusarium oxysporum f. sp. Zingiberi , Pectobacterium brasiliense , Bacillus pumilus , Pythium myriotylum , and Enterobacter cloacae . This complex pathosystem is worth studying to determine the microbiome and the metabolome assembly that keeps plants healthy. 
Here, we performed metataxonomic analyses using bacterial and fungal amplicon sequencing and untargeted metabolomics analysis to identify the metabolome-driven structure and function of microbial communities associated with rhizome rot and ginger plant health.
Sample collection and preparation
Samples were collected in the Laiwu district of Jinan, Shandong Province (36°19′50″ N, 117°29′29″ E; northern China), which has optimal growing conditions, but rhizome rot is a factor limiting the yield and marketability of ginger . The sampling area is in a typical warm-temperate humid/semihumid climate zone, with an annual mean temperature of 12.5 °C, annual mean precipitation of 688.9 mm, and 62% relative humidity. The frost-free period is 191 days, and the annual sunshine hours are 2629 h . Almost 70% of the total precipitation occurs from July to September. The soil in the area is classified as sandy loam . The ginger variety used, Zingiber officinale var. officinale, was the same as that planted by local farmers. The size of each plot was approximately 666 m², and 7000 to 8000 plants were grown in each plot. The plots were subjected to the same irrigation and fertilization regimes. These plants were watered ten times during the crop growth cycle. Approximately 100 kg of compound and organic fertilizer (chicken manure) were applied to the soil at various times during the crop cycle, including during soil preparation, sowing, and crop growth. Sample collection was performed on September 12, 2021. In September, the mean temperature is 25°C during the day and 18°C at night. The relative humidity of the soil is 75–85%, and the area is exposed to 9 h of sunshine on average. Only ginger was grown within a radius of at least 1500 m in the sampled area. The area where samples were collected is also used for planting garlic. The crop rotation cycle occurs every 2 years, and this area has been dedicated to ginger farming for approximately 40 years. The diseased plants were stunted with yellowish, dry lower leaves that turned brown. Additionally, their rhizomes were rotted or spongy, which aligns with the symptoms of rhizome rot previously described . Endophytic bacteria were identified from asymptomatic plant tissues, but there was a notable increase in Pectobacterium carotovorum subsp. brasiliense (Supplementary Table 1) in diseased plants compared to healthy plants. Three replicates of healthy and diseased plants were collected from three adjacent plots. Each replicate consisted of a composite sample obtained by mixing three samples collected from the same niche (leaf, stem, root, rhizome, rhizosphere soil, and bulk soil; Fig. D) from three symptomatic or asymptomatic plants per plot for a total of 36 composite samples. The rhizosphere is the microbial habitat around the root , although we also applied this term to the soil adjacent to the rhizome. Approximately 30 g of bulk soil sample was collected at a distance of 20 cm from the root and at a depth of 0 to 15 cm, and the rhizosphere soil attached to the roots and rhizomes was collected by manual shaking. The samples were subsequently transferred to collection bags and transported to the laboratory on dry ice. Plant samples were washed immediately upon arrival at the laboratory with tap water until they appeared to be free of debris and then rinsed three times with distilled water (dH2O). To sterilize the surface of the plant organs and remove exogenous bacteria and fungi, the samples that were used for endophytic diversity analyses were immersed in 70% ethanol for 5 min, 2.5% sodium hypochlorite solution for 1–2 min, and 70% ethanol for 1 min and then rinsed vigorously three times with sterilized Millipore water. To verify the efficacy of the sterilization process, a sample from the last portion of the water used for washing was inoculated on potato dextrose agar (PDA) plates, which were incubated at 28 °C for 10 days, and on LB plates, which were incubated at 37 °C for 5 days before checking for the appearance of colonies . The surface-sterilized plant organs constituted the endophyte samples. Samples for molecular analysis were stored in a − 80°C freezer until DNA extraction.
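As a quick check on the sample accounting implied by this design (the arithmetic below is illustrative, not a quotation from the study):

```latex
\underbrace{2}_{\text{healthy/diseased}}\times\underbrace{6}_{\text{niches}}\times\underbrace{3}_{\text{plot replicates}} = 36\ \text{composite samples},
\qquad
\underbrace{2}_{\text{healthy/diseased}}\times\underbrace{4}_{\text{plant organs}}\times\underbrace{3}_{\text{plot replicates}} = 24\ \text{tissue samples used later for metabolomics.}
```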
DNA extraction and PCR amplification
All laboratory protocols were performed at Shanghai Majorbio Bio-pharm Technology Co., Ltd. The samples were processed under normal experimental conditions. Illumina metagenomic library preparation guidelines were followed to create 16S and ITS rRNA gene amplicon libraries. DNA extraction from 0.5 g of rhizosphere and bulk soil samples or 5 g of plant tissues was performed using a DNeasy PowerSoil Kit (Qiagen, MD, USA) according to the manufacturer's instructions. After the genomic DNA extraction was completed, 1% agarose gel electrophoresis was carried out to detect the extracted genomic DNA. DNA was quantified using a NanoDrop spectrophotometer. Each sample was tested three times and kept at − 20 °C until PCR amplification was performed. The V5–V7 hypervariable region of the bacterial 16S rRNA gene was amplified using the universal primers 799F (5′-AACMGGATTAGATACCCKG-3′) and 1193R (5′-ACGTCATCCCCACCTTCC-3′), which provided a more accurate picture of the bacterial community structure and very low amplification of nontarget DNA , while the fungal ITS2 region was amplified using the primers ITS3F (5′-GCATCGATGAAGAACGCAGC-3′) and ITS4R (5′-TCCTCCGCTTATTGATATGC-3′) , which proved to be the most appropriate for the characterization of fungal communities with metabarcoding . An AxyPrep DNA Gel Recovery Kit (AXYGEN) was used to excise the products from the gel and recover them according to the manufacturer's instructions. PCR products were assessed and quantified with the QuantiFluor™-ST Blue Fluorescence Quantitative System (Promega Corporation, Madison, WI, USA). Replicates of the same sample were pooled in equimolar proportions for sequencing.
Amplicon sequencing and bioinformatic analysis
The bacterial and fungal amplicon sequences of the 36 analyzed samples were independently sequenced. Negative controls were used (sterile water was used instead of template DNA) to exclude contamination by PCR amplification. Amplicon libraries were sequenced on the Illumina MiSeq PE300 platform (Illumina, USA) according to the manufacturer's protocols, and 250 bp paired-end reads were generated. The 16S rRNA and ITS gene sequences generated were analyzed using the online Majorbio Cloud Platform based on the QIIME pipeline version 1.9.1 using recommended parameters. Paired-end reads obtained from the Illumina platform were assembled, and the primer sequences and low-quality reads with scores less than Q30 were trimmed using USEARCH v.11.0 software with default parameters. The sequencing run produced 2,645,244 high-quality reads across the 36 input libraries. Operational taxonomic units (OTUs) were assigned based on 97% similarity among clustered reads and then checked for chimeras using the UPARSE (v.7.0.1090, https://drive5.com/uparse/ ) pipeline in USEARCH v.11.0 software with default parameters before generating an OTU count table. OTUs were taxonomically annotated using the SILVA reference database (v.138, https://www.arb-silva.de ) and the UNITE database (v.8.0, http://unite.ut.ee/index.php ) for bacteria and fungi, respectively. The Shannon rarefaction curve was calculated (Supplementary Fig. 1A and 1B) by randomly resampling each sample several times, plotting the rarefied number of OTUs defined at a 97% sequence similarity threshold relative to the number of samples (Mothur v.1.30.2, https://www.mothur.org/wiki/Download_mothur ), and the minimum number required for subsequent analysis was validated. We performed a single rarefaction at a depth of the shallowest sample to control for variable sequencing effort between representatives. Then, we chose a subsampling depth of 27,618 sequences per bacterial sample and 45,861 per fungal sample, which yielded a final rarefied dataset for all 36 samples. Bacterial and fungal sequences were assigned to each sample based on their barcodes using the SILVA v138 16S ( http://www.arb-silva.de ) and UNITE v8.0 ITS ( http://unite.ut.ee/index.php ) databases, respectively.
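As a stand-in illustration of the quality screen described above, the sketch below keeps merged reads whose mean Phred score is at least 30, using Biopython rather than the USEARCH v.11.0 pipeline actually employed (which applies its own filtering logic). The file names are hypothetical.

```python
# Minimal sketch of a Q30 mean-quality screen on merged amplicon reads (illustrative only).
from statistics import mean
from Bio import SeqIO

kept = []
for rec in SeqIO.parse("merged_reads.fastq", "fastq"):       # hypothetical input file
    if mean(rec.letter_annotations["phred_quality"]) >= 30:  # per-base Phred scores
        kept.append(rec)

SeqIO.write(kept, "merged_reads.q30.fastq", "fastq")
print(f"retained {len(kept)} reads with mean quality >= Q30")
```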
Microbial diversity analysis
Although different indices showed very similar results (source data Fig. ), plant health was related to both diversity and microbial composition . Thus, two alpha diversity indices were considered at the genus level: observed richness (Sobs), which provides a direct measure of population complexity by counting the number of different species in a sample (observed OTUs), and the Shannon H' index, which is an estimator of taxon diversity combining richness and uniformity, with the Kruskal–Wallis test for all pairwise combinations. Principal coordinate analysis (PCoA) was conducted with the vegan package v.2.4.3 in R software v.3.3 based on the Bray–Curtis distance algorithm to visualize the β diversity pattern of microbial communities between samples from different microbial niches of healthy and diseased plants. Permutational multivariate analysis of variance (PERMANOVA) was performed using 999 permutations computed from the rarefied dataset ( n = 36) to test the relative contribution of both disease and plant compartment microhabitats to community dissimilarity. The core or generalist taxa in the ginger microbiomes were defined as OTUs present in 100% of the plant samples, while the specialists were present in only one plant niche.
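To make these diversity summaries concrete, here is a minimal NumPy/SciPy sketch that computes observed richness, the Shannon index, Bray–Curtis dissimilarities, and a classical PCoA from a rarefied OTU table. It stands in for the mothur/vegan workflow actually used, and the input file name is hypothetical.

```python
# Illustrative alpha/beta diversity computations from a samples-x-OTUs count table.
import numpy as np
import pandas as pd
from scipy.spatial.distance import pdist, squareform

otu = pd.read_csv("otu_table_rarefied.csv", index_col=0)    # hypothetical rarefied table

counts = otu.to_numpy(dtype=float)
sobs = (counts > 0).sum(axis=1)                             # observed richness per sample

p = counts / counts.sum(axis=1, keepdims=True)              # relative abundances
with np.errstate(divide="ignore", invalid="ignore"):
    plogp = np.where(p > 0, p * np.log(p), 0.0)
shannon = -plogp.sum(axis=1)                                # Shannon H' per sample

# Bray-Curtis dissimilarities and a classical (metric) PCoA on the first two axes.
bc = squareform(pdist(counts, metric="braycurtis"))
n = bc.shape[0]
J = np.eye(n) - np.ones((n, n)) / n                         # centring matrix
B = -0.5 * J @ (bc ** 2) @ J                                # double-centred matrix
eigvals, eigvecs = np.linalg.eigh(B)
order = np.argsort(eigvals)[::-1]
pos = np.clip(eigvals[order[:2]], 0, None)
coords = eigvecs[:, order[:2]] * np.sqrt(pos)

summary = pd.DataFrame(
    {"sobs": sobs, "shannon": shannon, "PCo1": coords[:, 0], "PCo2": coords[:, 1]},
    index=otu.index,
)
print(summary.round(3))
```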
Predictive and statistical analysis
The data are displayed as the average of at least three independent replications and the standard deviation. P values less than 0.05 were considered to indicate statistical significance. We summarized the distribution of the annotated OTUs based on the species results to reveal the general species distribution patterns of the different samples. In particular, pie diagrams were generated to indicate the numbers of shared (core) or unique (specialist) microbial genera among compartments for healthy and diseased ginger plants. Clustering heatmaps reflecting differences in the abundance of different samples through color changes were generated ('ggplot2' package v3.2.1 in R Studio v3.5.3). Microbial functional assemblages from 16S rRNA gene sequences were predicted by FAPROTAX and were compared using the Kruskal–Wallis rank sum test, while fungal OTUs were classified into ecological guilds using the online application FUNGuild . A confidence ranking of "highly probable" or "probable" was retained for high accuracy, whereas those with "possible" confidence rankings were considered unclassified. Undefined guilds: undefined pathogens, defined as nonspecific pathogens of fungi, plants, or animals; undefined saprotrophs, defined as nonspecific saprotrophs of wood, plants, or litter soil. Linear discriminant analysis (LDA) effect size (LEfSe) was applied to determine the features (differentially enriched microbial taxa and functions) most likely to explain differences between healthy and diseased ginger plants. The samples were pooled to analyze the soil and endophyte microbiomes of plants that appeared healthy or diseased. Taxa with an LDA effect size greater than 4.0 ( P < 0.05) were considered significant.
Metabolomics analysis
We analyzed changes in the endophyte microbiome of plants driven by the metabolome and implications for plant health. The same 24 samples of leaves, stems, roots, and rhizomes from healthy and diseased plants that were used for the microbiome analysis were analyzed using an untargeted metabolomics approach. Fifty milligrams of each sample was added to a 2-ml centrifuge tube, and a 6-mm diameter grinding bead was added. For metabolite extraction, 400 μL of methanol:water (4:1, v:v) containing 0.02 mg/mL internal standard (L-2-chlorophenylalanine) was used. The samples were ground with a Wonbio-96c frozen tissue grinder (Shanghai Wanbo Biotechnology Co., Ltd.) for 6 min (− 10°C, 50 Hz), followed by ultrasonic extraction at a low temperature for 30 min (5°C, 40 kHz). The samples were kept at − 20°C for 30 min and then centrifuged for 15 min (4°C, 13,000 g ), after which the supernatant was transferred to an injection vial for LC‒MS/MS analysis in positive and negative ionization modes. A pooled quality control sample (QC) was prepared by mixing equal volumes of all the samples. The QC samples were handled and tested in the same manner as the analytic samples. LC‒MS/MS analysis of the samples was conducted on a SCIEX UPLC-Triple TOF 5600 system equipped with an ACQUITY HSS T3 column (100 mm × 2.1 mm i.d., 1.8 μm; Waters, USA) at Majorbio Bio-Pharm Technology Co., Ltd. (Shanghai, China). The mobile phases consisted of 0.1% formic acid in water:acetonitrile (95:5, v/v) (solvent A) and 0.1% formic acid in acetonitrile:isopropanol:water (47.5:47.5, v/v) (solvent B). The flow rate was 0.40 mL/min, and the column temperature was 40°C. The UPLC system was coupled to a quadrupole time-of-flight mass spectrometer (TripleTOF™ 5600+, Sciex, USA) equipped with an electrospray ionization (ESI) source operating in positive mode and negative mode. The optimal conditions were set as follows: source temperature, 550°C; curtain gas (CUR), 30 psi; Ion Source Gas1 and Gas2, 50 psi; ion-spray voltage floating (ISVF), − 4000 V in negative mode and 5000 V in positive mode; declustering potential, 80 V; collision energy (CE), 20–60 eV rolling for MS/MS. Data acquisition was performed in information-dependent acquisition (IDA) mode. Detection was carried out over a mass range of 50–1000 m/z. Pretreatment of the raw LC/MS data was performed with Progenesis QI (Waters Corporation, Milford, USA) software, and a three-dimensional data matrix was exported in CSV format. Internal standard peaks, as well as any known false-positive peaks (including noise, column bleeding, and derivatized reagent peaks), were removed from the data matrix, and the peaks were pooled. Moreover, the metabolites were identified by searching databases, and the main databases used were the HMDB ( http://www.hmdb.ca/ ), Metlin ( https://metlin.scripps.edu/ ), and Majorbio databases.
The data were analyzed through the free online platform Majorbio cloud (cloud.majorbio.com). At least 80% of the metabolic features detected in any set of samples were retained. To reduce the errors caused by sample preparation and instrument instability, the response intensity of the sample mass spectrum peaks was normalized by the sum normalization method, after which the normalized data matrix was obtained. The RSD was less than 0.3 for the overall dataset, and the peak ratio was more than 70%, so the overall data were suitable for subsequent analysis (Supplementary Fig. 3). The total set of metabolites identified was annotated using public databases, including KEGG (Kyoto Encyclopedia of Genes and Genomes, http://www.genome.jp/kegg/ ) and HMDB (Human Metabolome Database, www.hmdb.ca ). Pearson's correlation based on the Bray‒Curtis distance algorithm was used to evaluate associations between genus-level endophyte microbiome abundances and metabolites in the ginger plant compartments. Correlation analysis heatmaps were drawn, and KEGG enrichment analysis of the differentially abundant metabolites was performed using SciPy v.1.0.0 (Python) software. The differentially abundant metabolites were screened with the orthogonal projections to latent structures discriminant analysis (OPLS-DA) model using the default criteria, with a variable importance value (VIP) ≥ 1 and a significance threshold of P < 0.001, using ropls v.1.6.2 (R software). Procrustes analysis of the Euclidean distances of eigenvalues for both the bacterial or fungal microbiome and metabolome datasets was executed to analyze the congruence of two-dimensional shapes produced from the superimposition of principal component analyses (PCAs) .
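The feature filtering, sum normalization, QC-based RSD filter, and Procrustes comparison described above can be sketched as follows. This is an illustrative stand-in rather than the Progenesis QI/Majorbio workflow itself: the file names, the QC-labelling convention, and the random score matrices fed to the Procrustes call are all assumptions.

```python
# Hedged sketch of metabolomics feature filtering/normalisation plus a Procrustes test.
import numpy as np
import pandas as pd
from scipy.spatial import procrustes

peaks = pd.read_csv("metabolite_peak_table.csv", index_col=0)   # samples x features (hypothetical)
qc_mask = peaks.index.str.startswith("QC")                      # assumed QC sample naming

# 80% rule: keep features detected in at least 80% of the biological samples.
bio = peaks.loc[~qc_mask]
keep = (bio > 0).mean(axis=0) >= 0.80
peaks = peaks.loc[:, keep]

# Sum normalisation: scale each sample so its feature intensities sum to 1.
norm = peaks.div(peaks.sum(axis=1), axis=0)

# QC filter: drop features whose relative standard deviation in QC injections exceeds 0.3.
qc = norm.loc[qc_mask]
rsd = qc.std(axis=0) / qc.mean(axis=0)
norm = norm.loc[:, rsd <= 0.3]

# Procrustes congruence between two ordinations (e.g., microbiome vs. metabolome PCA scores);
# random matrices of matching shape stand in for the real score matrices here.
scores_a = np.random.default_rng(0).normal(size=(24, 2))
scores_b = np.random.default_rng(1).normal(size=(24, 2))
_, _, m2 = procrustes(scores_a, scores_b)
print(f"{norm.shape[1]} features retained; Procrustes M^2 = {m2:.3f}")
```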
Samples were collected in the Laiwu district of Jinan, Shandong Province (1.36°19′50" N, 117°29′29" E; northern China), which has optimal growing conditions, but rhizome rot is a factor limiting the yield and marketability of ginger . The sampling area is in a typical warm-temperate humid/semihumid climate zone, with an annual mean temperature of 12.5 °C, annual mean precipitation of 688.9 mm, and 62% relative humidity. The frost-free period is 191 days, and the annual sunshine hours are 2629 h . Almost 70% of the total precipitation occurs from July to September. The soil in the area is classified as sandy loam . The ginger variety used, Zingiber officinale var. officinale, was the same as that planted by local farmers. The size of each plot was approximately 666 m 2 , and 7000 to 8000 plants were grown in each plot. The plots were subjected to the same irrigation and fertilization regimes. These plants were watered ten times during the crop growth cycle. Approximately 100 kg of compound and organic fertilizer (chicken manure) were applied to the soil at various times during the crop cycle, including during soil preparation, sowing, and crop growth. Sample collection was performed on September 12, 2021. In September, the mean temperature is 25°C during the day and 18°C at night. The relative humidity of the soil is 75–85%, and the area is exposed to 9 h of sunshine on average. Only ginger was grown within a radius of at least 1500 m in the sampled area. The area where samples are collected is also utilized for planting garlic. The crop rotation cycle occurs every 2 years, and this area has been dedicated to ginger farming for approximately 40 years. The diseased plants were stunted with yellowish, dry lower leaves that turned brown. Additionally, their rhizomes were rotted or spongy, which aligns with the symptoms of rhizome rot previously described . Endophytic bacteria were identified from asymptomatic plant tissues, but there was a notable increase in P ectobacterium_carotovorum _subsp._ brasiliense (Supplementary Table 1) in diseased plants compared to healthy plants. Three replicates of healthy and diseased plants were collected from three adjacent plots. Each replicate consisted of a composite sample obtained by mixing three samples collected from the same niche (leaf, stem, root, rhizome, rhizosphere soil, and bulk soil; Fig. D) from three symptomatic or asymptomatic plants per plot for a total of 36 composite samples. The rhizosphere is the microbial habitat around the root , although we also applied this term to the soil adjacent to the rhizome. Approximately 30 g of bulk soil sample was collected at a distance of 20 cm from the root and at a depth of 0 to 15 cm, and the rhizosphere soil attached to the roots and rhizomes was collected by manual shaking. The samples were subsequently transferred to collection bags and transported to the laboratory on dry ice. Plant samples were washed immediately upon arrival at the laboratory with tap water until they appeared to be free of debris and then rinsed three times with distilled water (dH 2 O). To sterilize the surface of the plant organs and remove exogenous bacteria and fungi, the samples that were used for endophytic diversity analyses were immersed in 70% ethanol for 5 min, 2.5% sodium hypochlorite solution for 1–2 min, and 70% ethanol for 1 min and then rinsed vigorously three times with sterilized Millipore water. 
To verify the efficacy of the sterilization process, a sample from the last portion of the water used for washing was inoculated on potato dextrose agar (PDA) plates, which were incubated at 28 °C for 10 days, and on LB plates, which were incubated at 37 °C for 5 days before checking for the appearance of colonies . The surface-sterilized plant organs constituted the endophyte samples. Samples for molecular analysis were stored in a − 80°C freezer until DNA extraction.
All laboratory protocols were performed at Shanghai Majorbio Bio-pharm Technology Co., Ltd. The samples were processed under normal experimental conditions. Illumina metagenomic library preparation guidelines were followed to create 16S and ITS rRNA gene amplicon libraries. DNA extraction from 0.5 g of rhizosphere and bulk soil samples or 5 g of plant tissues was performed using a DNeasy PowerSoil Kit (Qiagen, MD, USA) according to the manufacturer’s instructions. After the genomic DNA extraction was completed, 1% agarose gel electrophoresis was carried out to detect the extracted genomic DNA. DNA was quantified using a NanoDrop spectrophotometer. Each sample was tested three times and kept at − 20℃ until PCR amplification was performed. The V5–V7 hypervariable region of the bacterial 16S rRNA gene was amplified using the universal primers 799F (5′-AACMGGATTAGATACCCKG-3′) and 1193R (5′-ACGTCATCCCCACCTTCC-3′), which provided a more accurate picture of the bacterial community structure and very low amplification of nontarget DNA , while the fungal ITS2 region was amplified using the primers ITS3F (5′-GCATCGATGAAGAACGCAGC-3′) and ITS4R (5′-TCCTCCGCTTATTGATATGC-3′) , which proved to be the most appropriate for the characterization of fungal communities with metabarcoding . An AxyPrep DNA Gel Recovery Kit (AXYGEN) was used to excise the products from the gel and recover them according to the manufacturer’s instructions. PCR products were assessed and quantified with the QuantiFluorTM-ST Blue Fluorescence Quantitative System (Promega Corporation, Madison, WI, USA). Replicates of the same sample were pooled in equimolar proportions for sequencing.
The bacterial and fungal amplicon sequences of the 36 analyzed samples were independently sequenced. Negative controls were used (sterile water was used instead of template DNA) to exclude contamination by PCR amplification. Amplicon libraries were sequenced on the Illumina MiSeq PE300 platform (Illumina, USA) according to the manufacturer’s protocols, and 250 bp paired-end reads were generated. The 16S rRNA and ITS gene sequences generated were analyzed using the online Majorbio Cloud Platform based on the QIIME pipeline version 1.9.1 using recommended parameters. Paired-end reads obtained from the Illumina platform were assembled, and the primer sequences and low-quality reads with scores less than Q30 were trimmed using USEARCH v.11.0 software with default parameters. The sequencing run produced 2,645,244 high-quality reads across the 36 input libraries. Operational taxonomic units (OTUs) were assigned based on 97% similarity among clustered reads and then checked for chimeras using the UPARSE (v.7.0.1090, https://drive5.com/uparse/ ) pipeline in USEARCH v.11.0 software with default parameters before generating an OTU count table. OTUs were taxonomically annotated using the SILVA reference database (v.138, https://www.arb-silva.de ) and I database (v.8.0, http://unite.ut.ee/index.php ) for bacteria and fungi, respectively. The Shannon rarefaction curve was calculated (Supplementary Fig. 1A and 1B) by randomly resampling each sample several times, plotting the rarefied number of OTUs defined at a 97% sequence similarity threshold relative to the number of samples (Mothur v.1.30.2, https://www.mothur.org/wiki/Download_mothur ), and the minimum number required for subsequent analysis was validated. We performed a single rarefaction at a depth of the shallowest sample to control for variable sequencing effort between representatives. Then, we chose a subsampling depth of 27,618 sequences per bacterial sample and 45,861 per fungal sample, which yielded a final rarefied dataset for all 36 models. Bacterial and fungal sequences were assigned to each sample based on their barcodes using the SILVA v138 16S ( http://www.arb-silva.de ) and UNITE v8.0 ITS ( http://unite.ut.ee/index.php ) databases, respectively.
Although different diversity indices showed very similar results (source data Fig. ), plant health was related to both diversity and microbial composition . Thus, two alpha diversity indices were considered at the genus level: observed richness (Sobs), which provides a direct measure of community complexity by counting the number of different taxa in a sample (observed OTUs), and the Shannon H’ index, an estimator of taxon diversity that combines richness and evenness; differences were tested with the Kruskal–Wallis test for all pairwise combinations. Principal coordinate analysis (PCoA) based on the Bray–Curtis distance was conducted with the vegan package v.2.4.3 in R software v.3.3 to visualize the β-diversity patterns of the microbial communities between samples from different microbial niches of healthy and diseased plants. Permutational multivariate analysis of variance (PERMANOVA) with 999 permutations was computed from the rarefied dataset ( n = 36) to test the relative contribution of disease status and plant compartment microhabitat to community dissimilarity. Core or generalist taxa in the ginger microbiomes were defined as OTUs present in 100% of the plant samples, while specialists were present in only one plant niche.
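A minimal sketch of this alpha and beta diversity workflow with vegan is shown below; it assumes a rarefied genus-level table `genus_rar` (samples × genera) and a metadata frame `meta` with `health` and `compartment` columns, all of which are illustrative names rather than the study’s actual objects.

```r
# Alpha diversity, Bray-Curtis PCoA, and PERMANOVA as described above.
library(vegan)

# Alpha diversity: observed richness (Sobs) and Shannon H'
sobs    <- specnumber(genus_rar)
shannon <- diversity(genus_rar, index = "shannon")

# Kruskal-Wallis test across groups (e.g., health status)
kruskal.test(shannon ~ meta$health)

# Beta diversity: Bray-Curtis distance, PCoA, and PERMANOVA (999 permutations)
bray <- vegdist(genus_rar, method = "bray")
pcoa <- cmdscale(bray, k = 2, eig = TRUE)
adonis2(bray ~ health * compartment, data = meta, permutations = 999)
```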
The data are presented as the mean of at least three independent replicates with the standard deviation. P values less than 0.05 were considered to indicate statistical significance. We summarized the distribution of the annotated OTUs at the species level to reveal the general distribution patterns across the different samples. In particular, pie diagrams were generated to indicate the numbers of shared (core) or unique (specialist) microbial genera among compartments for healthy and diseased ginger plants. Clustering heatmaps reflecting differences in abundance between samples through color changes were generated with the ggplot2 package v3.2.1 in R v3.5.3. Microbial functional assemblages were predicted from the 16S rRNA gene sequences with FAPROTAX and compared using the Kruskal–Wallis rank sum test, while fungal OTUs were classified into ecological guilds using the online application FUNGuild . Assignments with a confidence ranking of “highly probable” or “probable” were retained for high accuracy, whereas those ranked “possible” were considered unclassified. Undefined guilds were retained as follows: undefined pathogens (nonspecific pathogens of fungi, plants, or animals) and undefined saprotrophs (nonspecific saprotrophs of wood, plants, or litter/soil). Linear discriminant analysis (LDA) effect size (LEfSe) was applied to determine the features (differentially enriched microbial taxa and functions) most likely to explain differences between healthy and diseased ginger plants; for this analysis, samples were pooled to compare the soil and endophyte microbiomes of plants that appeared healthy or diseased. Taxa with an LDA effect size greater than 4.0 ( P < 0.05) were considered significant.
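The FUNGuild confidence filtering described above can be sketched as follows; the column names correspond to one common FUNGuild output format and are assumptions that may need adapting to the actual file.

```r
# Keep only FUNGuild assignments ranked "Highly Probable" or "Probable" and
# treat "Possible" assignments as unclassified, mirroring the rule stated above.
funguild <- read.delim("fungal_otus.guilds.txt", stringsAsFactors = FALSE)

keep <- funguild$confidenceRanking %in% c("Highly Probable", "Probable")
funguild$guild[!keep] <- "Unclassified"

# The relative abundance of each guild can then be summarized per sample, and
# group differences tested with a Kruskal-Wallis rank sum test, e.g.:
# kruskal.test(guild_abundance ~ group)
```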
To analyze changes in the plant endophyte microbiome driven by the metabolome and their implications for plant health, the same 24 samples of leaves, stems, roots, and rhizomes from healthy and diseased plants that were used for the microbiome analysis were analyzed using an untargeted metabolomics approach. Fifty milligrams of each sample was added to a 2-ml centrifuge tube together with a 6-mm diameter grinding bead. For metabolite extraction, 400 μL of methanol:water (4:1, v:v) containing 0.02 mg/mL internal standard (L-2-chlorophenylalanine) was used. The samples were ground with a Wonbio-96c frozen tissue grinder (Shanghai Wanbo Biotechnology Co., Ltd.) for 6 min (− 10°C, 50 Hz), followed by ultrasonic extraction at low temperature for 30 min (5°C, 40 kHz). The samples were kept at − 20°C for 30 min and then centrifuged for 15 min (4°C, 13,000 g ), after which the supernatant was transferred to an injection vial for LC‒MS/MS analysis in positive and negative ionization modes. A pooled quality control (QC) sample was prepared by mixing equal volumes of all the samples; the QC samples were processed and analyzed in the same manner as the analytic samples. LC‒MS/MS analysis was conducted on a SCIEX UPLC-Triple TOF 5600 system equipped with an ACQUITY HSS T3 column (100 mm × 2.1 mm i.d., 1.8 μm; Waters, USA) at Majorbio Bio-Pharm Technology Co., Ltd. (Shanghai, China). The mobile phases consisted of 0.1% formic acid in water:acetonitrile (95:5, v/v) (solvent A) and 0.1% formic acid in acetonitrile:isopropanol:water (47.5:47.5, v/v) (solvent B). The flow rate was 0.40 mL/min, and the column temperature was 40°C. The UPLC system was coupled to a quadrupole time-of-flight mass spectrometer (Triple TOF™ 5600+, Sciex, USA) equipped with an electrospray ionization (ESI) source operating in positive and negative modes. The optimal conditions were as follows: source temperature, 550°C; curtain gas (CUR), 30 psi; Ion Source Gas1 and Gas2, 50 psi; ion-spray voltage floating (ISVF), − 4000 V in negative mode and 5000 V in positive mode; declustering potential, 80 V; and collision energy (CE), 20–60 eV rolling for MS/MS. Data acquisition was performed in information-dependent acquisition (IDA) mode over a mass range of 50–1000 m/z. Pretreatment of the raw LC/MS data was performed with Progenesis QI (Waters Corporation, Milford, USA) software, and a three-dimensional data matrix was exported in CSV format. Internal standard peaks, as well as known false-positive peaks (including noise, column bleed, and derivatization reagent peaks), were removed from the data matrix, and redundant peaks were pooled. Metabolites were identified by searching databases, chiefly the HMDB ( http://www.hmdb.ca/ ), Metlin ( https://metlin.scripps.edu/ ), and Majorbio databases. The data were analyzed on the free online Majorbio cloud platform (cloud.majorbio.com). Only metabolic features detected in at least 80% of the samples of any group were retained. To reduce errors caused by sample preparation and instrument instability, the response intensities of the mass spectral peaks were normalized by the sum normalization method to obtain the normalized data matrix. The relative standard deviation (RSD) was less than 0.3 for the overall dataset and the peak ratio exceeded 70%, so the data were considered suitable for subsequent analysis (Supplementary Fig. 3).
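The feature filtering and normalization rules described above (80% rule, sum normalization, QC-based RSD check) can be sketched in R as follows; this is an illustration under assumed object names, not the Progenesis QI/Majorbio cloud implementation.

```r
# Assumes `X` is a features x samples intensity matrix, `group` labels each
# sample (e.g., niche or health status), and `is_qc` flags the pooled QC runs.

# 80% rule: keep features detected (> 0) in at least 80% of the samples of any group
detected_enough <- apply(X, 1, function(x)
  any(tapply(x > 0, group, mean) >= 0.80))
X <- X[detected_enough, , drop = FALSE]

# Sum normalization: scale each sample so its total intensity equals 1
X_norm <- sweep(X, 2, colSums(X), "/")

# QC-based RSD check: relative standard deviation of each feature across QC samples
rsd <- apply(X_norm[, is_qc, drop = FALSE], 1,
             function(x) sd(x) / mean(x))
summary(rsd)   # the study reports RSD < 0.3 for the retained dataset
```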
The total set of identified metabolites was annotated using public databases, including KEGG (Kyoto Encyclopedia of Genes and Genomes, http://www.genome.jp/kegg/ ) and HMDB (Human Metabolome Database, www.hmdb.ca ). Pearson’s correlation was used to evaluate the associations between the genus-level abundances of the endophyte microbiome and the metabolite abundances in the ginger plant compartments. Correlation heatmaps were drawn, and KEGG enrichment analysis of the differentially abundant metabolites was performed using SciPy v.1.0.0 (Python) software. Differentially abundant metabolites were screened with an orthogonal projections to latent structures discriminant analysis (OPLS-DA) model using the default criteria, a variable importance in projection (VIP) value ≥ 1 and a significance threshold of P < 0.001, with ropls v.1.6.2 (R software). Procrustes analysis of the Euclidean distances of the eigenvalues of both the bacterial or fungal microbiome and the metabolome datasets was executed to analyze the congruence of the two-dimensional shapes produced from the superimposition of principal component analyses (PCAs) .
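A minimal sketch of the Procrustes/PROTEST step with vegan is shown below, assuming matched samples × genera and samples × metabolites matrices; object names are illustrative and this is not necessarily the exact implementation used in this study.

```r
# Procrustes superimposition of microbiome and metabolome ordinations,
# with a permutation test via protest().
library(vegan)

pca_micro <- rda(genus_rar)   # unconstrained PCA of the community data
pca_metab <- rda(metab)       # PCA of the metabolite matrix

proc <- procrustes(pca_micro, pca_metab, symmetric = TRUE)
prot <- protest(pca_micro, pca_metab, permutations = 999)

prot        # reports the Procrustes sum of squares (M2) and a permutation P value
plot(proc)  # superimposition of the two ordinations
```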
Overview of the sequencing and de novo assembly

Data analysis of 36 composite samples from 6 microbial niches of healthy and diseased plants was carried out to characterize the microbial communities associated with the sampled ginger plants. Supervised taxonomic classification of all high-quality reads was performed using the SILVA and UNITE databases to examine the taxonomic structure of the bacterial and fungal communities, respectively. A total of 994,248 archaeal/bacterial and 1,650,996 fungal high-quality reads were obtained and sorted into 5353 archaeal/bacterial and 1793 fungal operational taxonomic units (OTUs). Archaeal/bacterial OTUs were assigned to 2 domains, 2 kingdoms, 44 phyla, 125 classes, 304 orders, 517 families, 1073 genera, and 2101 species (Supplementary Table 1), and the fungal OTUs were assigned to 15 phyla, 51 classes, 112 orders, 233 families, 426 genera, and 667 species (Supplementary Table 2). The saturated rarefaction curves (Supplementary Fig. 1A, B) and species diversity (Shannon index) for both the archaeal/bacterial (Supplementary Fig. 1C) and fungal communities (Supplementary Fig. 1D) indicated that the sampling effort was adequate to capture the microbial communities within each sample. Proteobacteria (57.40%), Actinobacteriota (16.74%), Bacteroidota (8.24%), and Firmicutes (6.83%) were the dominant bacterial phyla, while unclassified_k_Fungi (62.08%) and Ascomycota (32.53%) were the dominant fungal phyla. Despite this great diversity, the bacterial and especially the fungal communities of the ginger ecosystem were dominated by a few phyla across all samples, and we next examined how differences in microbiome assembly can impact plant health.

Assemblage of plant-associated bacterial and fungal microbiota

Microbial composition in plant niches associated with plant health

To determine the microbial composition and relative abundance in the niches of healthy and diseased plants, we constructed pie diagrams representing the numbers of generalist (shared) and specialist (single-niche) microbes across the niches of all the plants. The total number of microbes per niche included generalists, specialists, and those inhabiting more than one niche, known as satellites. A greater number of bacterial genera was found in the microbial communities of healthy plants than in those of diseased plants, except in the rhizosphere soil, where the number of bacterial genera was higher in the diseased plants. A total of 3331 bacterial genera were identified in the healthy plants, with 138 unique to that group. In contrast, 2512 genera were detected in the diseased plants, with 58 unique to that group (Fig. ). Only two representatives of archaeal microorganisms (g_norank_f_ Nitrososphaeraceae and g_ Candidatus Nitrocosmicus ) were present in the analyzed soil samples, and their relative abundances were too low to be included in further analyses. Eighty-three bacterial genera were identified as members of the core (generalist) bacterial microbiota (Fig. C, Supplementary Table 3). The most abundant bacterial genera were Flavobacterium (10.48%), Acidovorax (8.78%), Sphingomonas (7.92%), Methylobacterium - Methylorubrum (6.38%), and Bacillus (5.07%). Compared with the same niches in the diseased plants, all the organs of the healthy plants except the stem harbored the largest numbers of endophyte bacteria (total; specialist); this trend was more notable in the leaves and rhizomes than in the other organs (Fig. A).
Healthy plants’ roots (588; 11) and rhizomes (550; 25) harbored the largest numbers of endophyte bacterial genera. However, rhizome rot strongly reduced these numbers in the roots (220; 1) and rhizomes (332; 3). Diseased plants exhibited fewer specialist bacteria in all plant organs except the stems. The most abundant fungal generalist genera were unclassified_k_Fungi (85.89%) and Gibellulopsis (5.20%) (Supplementary Table 3). The greatest number of fungal genera (total; specialist) was detected in the niches of healthy plants (922; 61) compared with diseased plants (833; 38), so the presence of the specialists was noticeably more affected by the disease. A greater number of fungal genera was observed in the rhizomes of diseased plants (70; 1) than in those of healthy plants (53; 0) (Fig. B). Most of the taxa with relatively high abundances inside the diseased ginger plants were also detected in the soil, indicating that these taxa might have colonized the plants from the ground. Interestingly, the rhizomes of the diseased plants harbored a greater diversity of fungi than did those of the healthy plants, while the opposite was true for bacteria.

Rhizome rot drives microbial community assembly in diverse plant niches

To quantify the diversity and summarize the structural changes in the microbial community, we first used the Kruskal‒Wallis test to compare the microbial alpha diversity across all niches of healthy and diseased plants. The soil samples showed the highest diversity of bacteria (Fig. A, C) and fungi (Fig. B, D). The microbial communities in the rhizosphere were similar to those in the bulk soil, except for the fungal community of healthy plants, which was notably richer in the rhizosphere (Sobs index: 180 ± 21.6). Significant differences were observed between the bacterial and fungal populations of healthy and diseased plants. The disease significantly reduced bacterial richness in both roots (healthy, 396.3 ± 56.2; diseased, 130.0 ± 26.2; P = 0.0439) and rhizomes (healthy, 353.7 ± 44.5; diseased, 185.0 ± 35.0; P = 0.0431) (Fig. A). Healthy plants had higher bacterial diversity in rhizomes (Shannon H': 4.30 ± 0.6). Diseased plants showed significant differences in fungal richness (Sobs index: 24.33 ± 9.3, Fig. B) and diversity (Shannon H': 0.04 ± 0.0, Fig. D) in rhizomes compared with healthy plants. To assess the microbial community dissimilarity between the niches of healthy and diseased plants, principal coordinate analysis (PCoA) based on the Bray–Curtis distance was performed (Fig. E, F); the closer two samples lie in the PCoA diagram, the more similar their community composition. The analysis revealed differences in bacterial and fungal microbiome composition between healthy and diseased plants. The first two axes accounted for about 50% and 47.5% of the variation for the bacterial microbiomes (PERMANOVA: R² = 0.70, P < 0.001; ANOSIM: R = 0.73, P < 0.001) and fungal microbiomes (PERMANOVA: R² = 0.63, P < 0.001; ANOSIM: R = 0.39, P < 0.001), respectively. Different plant niches displayed distinct microbial communities, suggesting a potential link to plant health; these findings indicate that plant health is connected to unique microbial communities in various parts of the plant. Additionally, functional signatures related to plant health status were predicted via FAPROTAX analysis based on the classification results from the 16S amplicon sequencing, with significance tested using a Kruskal–Wallis rank sum test (Fig. A, Supplementary Table 4).
The analysis predicted that the bacteria inhabiting the stems of diseased plants would have the highest functional potential for nitrogen (9.29 ± 2.19%), nitrate (8.05 ± 3.70%), and nitrite (8.79 ± 2.28%) respiration; nitrite (7.55 ± 1.85%) and nitrate (7.55 ± 1.55%) ammonification; nitrate reduction (10.47 ± 4.16%); and plant pathogens (8.37 ± 2.04%), presumably associated with the highest relative abundance of Pectobacterium (Fig. B). The most common functional groups of fungi were undefined saprotrophs in the bulk (25.19 ± 2.21%) and rhizosphere (37.09 ± 2.90%) soils of healthy plants, while in the diseased plants these functional groups were dominant in the bulk soil (13.91 ± 1.03%), roots (18.12 ± 1.92%), and rhizomes (17.12 ± 1.28%). Interestingly, the highest levels associated with the ecological guild of plant pathogens were observed in the rhizosphere (27.75 ± 2.08%) and rhizomes (27.75 ± 2.19%) of diseased plants (Fig. C), associated with an increased relative abundance of various potential pathogens in these microbial niches (Fig. D, Supplementary Table 5). The alpha diversity analysis indicates that plant roots and rhizomes harbor a large number and variety of bacterial microbes; however, rhizome rot disease reduced these indices, whereas it increased the diversity of the fungal microbes. Beta diversity analysis revealed changes in the composition of the microbial communities in rhizosphere soil due to plant disease. Additionally, the composition of the bacterial microbiome in rhizomes and roots differed from that in stems and leaves in healthy plants, but the disease nullified this difference.

Bacterial and fungal taxa potentially involved in plant health

We used linear discriminant analysis (LDA) effect size (LEfSe) to identify discriminative features across taxonomic levels for overall plant health regardless of the microbial niche. This analysis focused on potentially pathogenic and disease-suppressive microbes in soil and endophytes in plant tissues. In total, 105 taxa (from phylum to species) were identified with a log10 (LDA) score > 4.0 and a P value < 0.05. In the LEfSe analysis, we found seven plant-endophyte bacteria (Fig. A) and five soilborne bacteria (Fig. B) that serve as biomarkers of plant health. Specifically, bacteria such as s_unclassified_g_ Sphingomonas , Quadrisphaera granulorum , and Methylobacterium komagatae were significantly enriched in healthy plants, whereas bacteria such as P. carotovorum subsp. brasiliense , s_unclassified_f_Alcaligenaceae, Alcaligenes faecalis , and Klebsiella aerogenes were significantly increased in diseased plants. Additionally, we found certain bacteria enriched in the soil of healthy and diseased plants. Four species of endophytic plant fungi (Fig. A) and ten soil-borne fungi (Fig. B) were identified as potential biomarkers. In healthy plants, s_unclassified_k_Fungi (from phylum to species) was significantly enriched. Biomarkers associated with s_unclassified_g_ Cheilymenia (from class to species), Pseudaleuria sp. (from genus to species), Lophotrichus sp. (from order to species), Pseudogymnoascus sp. (from class to species), Gymnoascus sp. (from order to species), Mortierella polycephala (from phylum to species), and Eleutherascus cristatus (from family to species) were significantly increased in the soil of healthy plants.
In diseased plants, Gibellulopsis piscis (from phylum to species), Pyxidiophorales sp. (from class to species), and Plectosphaerella cucumerina (from phylum to species) were enriched, serving as potential biomarkers of disease. However, only three fungal biomarkers were characteristic of the soil of diseased plants: P. cucumerina (from genus to species), Trichoderma longibrachiatum (species), and Fusarium nematophilum (species). A probabilistic graphical model related to a co-occurrence Bayesian network (Supplementary Fig. 2A and B) revealed a robust core bacterial and fungal microbiota and biomarkers linked to ginger plant health, including the identification of Pectobacterium associated with rhizome rot in ginger plants. Greater network complexity has been associated with microbial communities exhibiting more intense activity and higher resilience to perturbation . Our analysis of intrakingdom networks revealed higher network complexity, with more nodes and edges, in the bacterial networks (Supplementary Fig. 2B and C) than in the fungal networks (Supplementary Fig. 2D and E). The node average degree (206.48) and number of positive edges (48,524) were higher in the bacterial co-occurrence network of diseased plants than in that of healthy plants (node average degree, 106.67; positive edges, 24,449). However, the bacterial community of healthy plants had far more negative edges (1458) than that of diseased plants (206). For the fungal networks, the node average degree (35.97), positive edges (6,487), and negative edges (23) were all higher in healthy plants than in diseased plants (node average degree, 29.64; positive edges, 4,816; negative edges, zero). Network statistics can be used to determine the importance of microorganisms in co-occurrence networks ; hub or keystone species can be inferred by identifying species with the highest network centrality indices. The network analysis revealed that all bacterial biomarkers were highly prevalent in the system. Bacillus and Sphingomonas were identified as the most crucial nodes in the genus-level network of the healthy ginger ecosystem (Supplementary Table 6). The co-occurrence network of diseased plants also featured significant bacterial biomarkers; however, the ginger pathogen Pectobacterium was the top-ranking bacterium (Supplementary Table 7). Moreover, the fungal biomarkers exhibited significant correlations within the microbial co-occurrence network of healthy ginger plants; notably, Pseudaleuria and Mortierella emerged as prominent, high-degree nodes among the top 10 hub nodes (Supplementary Table 8). Conversely, the fungal networks of diseased plants featured Plectosphaerella and Gibellulopsis (Supplementary Table 9). These results emphasize the potential significance of these microbial taxa in preserving the health of ginger plants.

Metabolites driving ginger microbial community assembly and plant health

Overview of metabolite information

We used untargeted metabolomics to simultaneously detect and analyze small-molecule metabolites that impact microbiome assembly and the health of ginger plants. The metabolomes of the niches corresponding to the vegetative organs of healthy and diseased plants were analyzed via LC–MS/MS, which revealed a total of 10,415 chromatographic peaks with 735 metabolites, 500 of which were in the library (annotated to public databases such as HMDB and Lipidmaps) and 199 of which were annotated to the KEGG database (Table , Supplementary Table 10).
The metabolites identified across all the samples included 170 lipids and lipid-like molecules, 79 organic acids and derivatives, 63 organic oxygen compounds, and other compounds (Fig. A). The highest numbers of differentially accumulated metabolites (total; specific to each niche) were found in the rhizomes (164; 86), followed by the leaves (135; 63), roots (89; 35), and stems (76; 25). Interestingly, a metabolite associated with the health of the whole ginger plant, 6-({[3,4-dihydroxy-4-(hydroxymethyl)oxolan-2-yl]oxy}methyl)oxane-2,3,4,5-tetrol, was identified (Fig. B).

The plant health-associated microbiome is driven by the metabolome

The relationship between plant health-associated microbes and metabolites was examined. Procrustes analyses were performed using ordination plots (PCA) as input, based on the Bray–Curtis matrix of the endophytic microbial communities. Significant associations were found between certain bacterial (M² = 0.58, P = 0.00) and fungal (M² = 0.84, P = 0.04) genera and metabolite synthesis, and these associations varied with plant health status and microbial niche (Fig. A). In diseased plants, only the fungi in the rhizomes and roots were closely linked to metabolite synthesis (Fig. B). The metabolites that drive the assembly of the potentially plant health-determining microbiota identified by the LEfSe analysis are detailed below. trans-EKODE-(E)-Ib ( P = 0.0137) and 2,3-dinor prostaglandin E1 ( P = 0.0359) were positively related to Sphingomonas , while piperidine ( P = 0.0004), cyclohexane ( P = 0.0006), tripropylamine ( P = 0.0005), palmitoleamide ( P = 0.0012), farnesyl acetone ( P = 0.0031), and C16 sphinganine ( P = 0.0055) were negatively correlated with this bacterial genus. trans-EKODE-(E)-Ib ( P = 0.0341) was positively related to Methylobacterium–Methylorubrum , while ethyl hydrogen sulfate ( P = 0.0008), 2-dodecylbenzenesulfonic acid ( P = 0.0009), piperidine ( P = 0.0034), farnesyl acetone ( P = 0.0040), 9,12,15-octadecatrien-1-ol ( P = 0.0042), 13Z-docosenamide ( P = 0.0090), palmitoleamide ( P = 0.0108), cyclohexane ( P = 0.0227), palmitic amide ( P = 0.0358), p-chlorophenylalanine ( P = 0.0424), and tripropylamine ( P = 0.0452) were negatively correlated with this group. Moreover, PA (16:0/18:2(9Z,12Z)) ( P = 0.0120) and isocitrate ( P = 0.0482) were positively related to Quadrisphaera . 2-Amino-4-methylpentanoic acid ( P = 0.0421) and p-chlorophenylalanine ( P = 0.0449) were positively related to Pectobacterium , while 6-{4-[3-(3,7-dimethylocta-2,6-dien-1-yl)-7-hydroxy-8-(4-hydroxy-3-methylbut-2-en-1-yl)-4-oxo-4H-chromen-2-yl]-3-hydroxyphenoxy}-3,4,5-trihydroxyoxane-2-carboxylic acid ( P = 0.0364) and quercetin tetramethyl (5,7,3',4') ether ( P = 0.0379) were negatively correlated with these bacteria. There was a significant positive correlation between linoleamide ( P = 0.0185) and Alcaligenes , and a negative correlation between this bacterial genus and DG (18:4(6Z,9Z,12Z,15Z)/18:2(9Z,12Z)/0:0) ( P = 0.0086) was detected. DG (18:4(6Z,9Z,12Z,15Z)/18:2(9Z,12Z)/0:0) was negatively correlated with Klebsiella ( P = 0.005). DG (18:4(6Z,9Z,12Z,15Z)/18:2(9Z,12Z)/0:0) ( P = 0.005) and 6-{4-[3-(3,7-dimethylocta-2,6-dien-1-yl)-7-hydroxy-8-(4-hydroxy-3-methylbut-2-en-1-yl)-4-oxo-4H-chromen-2-yl]-3-hydroxyphenoxy}-3,4,5-trihydroxyoxane-2-carboxylic acid ( P = 0.0025) were negatively correlated with Enterobacter (Fig. C).
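The genus–metabolite correlations reported here, and the fungal correlations that follow, can be screened with a simple sketch like the one below; it assumes matched genus and metabolite tables with illustrative names, and the published values come from the pipeline described in the Methods.

```r
# Pairwise Pearson correlation screen between genus abundances and metabolite
# intensities. Assumes `genus_rar` (samples x genera) and `metab_norm`
# (samples x metabolites) share the same row (sample) order.
cor_res <- expand.grid(genus = colnames(genus_rar),
                       metabolite = colnames(metab_norm),
                       stringsAsFactors = FALSE)

tests <- mapply(function(g, m) {
  ct <- cor.test(genus_rar[, g], metab_norm[, m], method = "pearson")
  c(r = unname(ct$estimate), p = ct$p.value)
}, cor_res$genus, cor_res$metabolite)

cor_res <- cbind(cor_res, t(tests))
subset(cor_res, p < 0.05)   # genus-metabolite pairs flagged as significant
```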
12,13-Epoxy-9,15-octadecadienoic acid ( P = 0.0235), L-glutamate ( P = 0.0353), and (E)-9,12,13-trihydroxyoctadec-10-enoic acid ( P = 0.0481) were positively associated with Gibellulopsis , while PA(16:0/18:2(9Z,12Z)) ( P = 0.0079) and LysoPA(0:0/18:2(9Z,12Z)) ( P = 0.0207) were negatively related to this genus. 12,13-Epoxy-9,15-octadecadienoic acid ( P = 0.0468) was positively correlated with Plectosphaerella . (±)9-HpODE ( P = 0.0428) was positively correlated with Lophotrichus . Isocitrate ( P = 0.0087) and 3'-methoxy--gingerdiol 3,5-diacetate ( P = 0.0167) significantly affected the presence of Mortierella (Fig. D).

The metabolome can directly impact the health of ginger plants

To identify the compounds that play roles in plant health, VIP values were combined with univariate statistical analysis. Of the total metabolic features (anion plus cation modes), 470 (4.51%) were enriched and 711 (6.83%) were depleted in healthy ginger plants compared with diseased ginger plants (Fig. A). One hundred five annotated metabolites exhibited significant differences in abundance (Welch’s two-sided t test, P < 0.05) between the diseased and healthy ginger plants, whereas the abundances of 469 annotated metabolites did not differ between the plant groups (Fig. B). The abundances of 74 annotated metabolites were reduced and those of 31 were enriched in healthy plants compared with diseased plants. Notably, niacinamide, a heterocyclic aromatic amide ( P < 0.001); 1-oleoyl lysophosphatidic acid, a metabolic intermediate involved in de novo lipid synthesis ( P < 0.001); and the phospholipid PG (16:0/16:0) ( P < 0.05) were enriched in healthy ginger plants, while the nonproteinogenic L-alpha-amino acid 4-methylene-L-glutamine, the alkaloid xanthine, and the purine derivative hypoxanthine, among others, were significantly more abundant ( P < 0.001) in diseased plants (Fig. C). Metabolite profiles of the plant niches revealed that niacinamide and PG (16:0/16:0) were upregulated in the rhizomes (VIP = 2.56, P = 0.0002, and VIP = 2.37, P = 0.0012) and leaves (VIP = 2.54, P = 0.0002, and VIP = 2.36, P = 0.0012), while 1-oleoyl lysophosphatidic acid was upregulated in the rhizomes (VIP = 2.19, P = 0.0003) and roots (VIP = 2.19, P = 0.0003) of healthy plants. In diseased plants, 4-methylene-L-glutamine was upregulated in the leaves (VIP = 3.49, P = 0.0000). Hypoxanthine and xanthine were also upregulated in the leaves (VIP = 3.16, P = 0.0000, and VIP = 3.29, P = 0.0000), and the latter was also upregulated in the rhizomes (VIP = 3.16, P = 0.0000) (Supplementary Table 11). These data support the idea that the identified metabolites drive the assembly of a healthy endophytic microbiota and directly influence plant health. However, further research is required to determine whether these metabolites come from the plants or from their microbiota.
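A sketch of the VIP-plus-univariate screen described above is given below, using the ropls package named in the Methods; the object names and exact model settings are assumptions and may need adjusting to the real dataset.

```r
# OPLS-DA VIP values combined with Welch's t test to flag differential metabolites.
# Assumes `metab_norm` is the normalized samples x metabolites matrix and
# `health` is a factor with levels "healthy"/"diseased".
library(ropls)

# OPLS-DA model: one predictive and one orthogonal component
model <- opls(metab_norm, health, predI = 1, orthoI = 1)
vip   <- getVipVn(model)

# Welch's two-sided t test per metabolite
pvals <- apply(metab_norm, 2, function(x)
  t.test(x ~ health, var.equal = FALSE)$p.value)

# Metabolites flagged as differentially abundant (VIP >= 1 and P < 0.05)
hits <- names(vip)[vip >= 1 & pvals[names(vip)] < 0.05]
head(hits)
```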
Data analysis of 36 composite samples from 6 microbial niches of healthy and diseased plants was carried out to characterize the microbial communities associated with the sampled ginger plants. Supervised taxonomic classification of all high-quality reads was performed using the SILVA and UNITE databases to examine the taxonomic structure of the bacterial and fungal communities, respectively. A total of 994,248 archaeal/bacterial and 1,650,996 fungal high-quality reads were obtained and sorted into 5353 archaeal/bacterial and 1793 fungal operational taxonomic units (OTUs). Archaeal/bacterial OTUs were assigned to 2 domains, 2 kingdoms, 44 phyla, 125 classes, 304 orders, 517 families, 1073 genera, and 2101 species (Supplementary Table 1), and the fungal OTUs were assigned to 15 phyla, 51 classes, 112 orders, 233 families, 426 genera, and 667 species (Supplementary Table 2). The saturated rarefaction curves (Supplementary Fig. 1A, B) and species diversity (Shannon index) for both the archaeal/bacterial (Supplementary Fig. 1C) and fungal communities (Supplementary Fig. 1D) indicated that the sampling efforts were adequate to reflect the microbial communities within each sample. Proteobacteria (57.40%), Actinobacteriota (16.74%), Bacteroidota (8.24%), and Firmicutes (6.83%) were the dominant bacterial phyla, while unclassified_k_Fungi (62.08%) and Ascomycota (32.53%) were the dominant fungal phyla. Despite the great diversity, the ginger ecosystem’s bacterial and especially fungal communities were dominated by a few phyla among all samples, and we next examined how differences in the microbiome assembly can impact plant health.
Microbial composition in plant niches associated with plant health To determine the microbial composition and relative abundance in the niches of healthy and diseased plants, we constructed pie diagrams to represent the number of generalist (shared) and specialist (inhabitants of a single niche) microbes between the niches of all the plants. The total number of microbes per niche included generalists, specialists, and those inhabiting more than one niche, known as satellites. A greater number of bacterial genera was found in the microbial communities of healthy plants compared to diseased plants, except in the rhizosphere soil, where the number of bacterial genera was higher in the diseased plants. A total of 3331 bacterial genera were identified in the healthy plants, with 138 unique to that group. In contrast, 2512 genera were detected in the diseased plants, with 58 unique to that group (Fig. ). Only two representatives of archaeal microorganisms (g_norank_f_ Nitrososphaeraceae and g_ Candidatus _ nitrocosmicus ) were present in the analyzed soil samples analyzed, and their relative abundances were very low to be included in further analyses. Eighty-three bacterial genera were identified as members of the core (generalist) bacterial microbiota (Fig. C, Supplementary Table 3). The most abundant bacterial genera were Flavobacterium (10.48%), Acidovorax (8.78%), Sphingomonas (7.92%), Methylobacterium - Methylorubrum (6.38%), and Bacillus (5.07%). Compared with the same niches in the diseased plants, all the organs of the healthy plants except for the stem harbored the largest number of endophyte bacteria (total; specialist); this trend was more notable in the leaves and rhizomes than in the other organs (Fig. A). Healthy plants’ roots (588; 11) and rhizomes (550; 25) harbored the most significant number of endophyte bacterial genera. However, rhizome rot strongly reduced the number of these in the roots (220; 1) and rhizomes (332; 3). Diseased plants exhibited fewer specialist bacteria in all plant organs except the stems. The most abundant fungal generalist genera were unclassified_k_Fungi (85.89%) and Gibellulopsis (5.20%) (Supplementary Table 3). The greatest number of fungal genera (total; specialist) was detected in the niches of healthy plants (922; 61) compared with diseased plants (833; 38) so the presence of the specialists was noticeably more affected by the disease. A greater abundance of fungal genera was observed in the rhizomes of diseased plants (70; 1) than in those of healthy plants (53; 0) (Fig. B). Most of the taxa with relatively high abundances inside the diseased ginger plants were also detected in the soil, indicating that these taxa might have colonized the plants from the ground. Interestingly, the rhizomes of the diseased plants harbored a greater diversity of fungi than did those of the healthy plants, while the opposite occurred for bacteria. Rhizome rot drives microbial community assembly in diverse plant niches To quantify the diversity and summarize the structural changes in the microbial community, we first used the Kruskal‒Wallis test to calculate the microbial alpha diversity across all niches of healthy and diseased plants. The soil samples showed the highest diversity of bacteria (Fig. A, C) and fungi (Fig. B, D). The microbial communities in the rhizosphere were similar to those in the bulk soil, except for the fungal community in healthy plants, which was notably richer in the rhizosphere (Sobs index: 180 ± 21.6). 
Significant differences were observed in the bacteria and fungal populations among healthy plants. The disease significantly reduced bacterial richness in both roots (healthy, 396.3 ± 56.2; diseased, 130.0 ± 26.2; P = 0.0439) and rhizomes (healthy, 353.7 ± 44.5; diseased, 185.0 ± 35.0; P = 0.0431) (Fig. A). Healthy plants had higher bacterial diversity in rhizomes (Shannon H': 4.30 ± 0.6). Diseased plants showed significant differences in fungal richness (Sobs index: 24.33 ± 9.3, Fig. B) and diversity (Shannon H': 0.04 ± 0.0, Fig. D) in rhizomes compared to healthy plants. To assess the microbial community dissimilarity between the niches of healthy and diseased plants, principal coordinate analysis (PCoA) based on Bray–Curtis distance was performed (Fig. E, F). The closer the distance between samples in the PCoA diagram, the more similar their community composition. The analysis revealed differences in bacterial and fungal microbiome compositions between healthy and diseased plants. The first two axes account for about 50% and 47.5% of the variation for bacterial microbiomes (PERMANOVA, R = 0.70, P < 0.001; ANOSIM: R = 0.73, P < 0.001) and fungal microbiomes (PERMANOVA: R 2 = 0.63, P < 0.001; ANOSIM: R = 0.39, P < 0.001), respectively. Different plant niches displayed distinct microbial communities, suggesting a potential link to plant health. These findings indicate that plant health is connected to unique microbial communities in various parts of the plant. Additionally, functional signatures related to plant health status were predicted via FAPROTAX analysis based on the classification results from 16S amplicon sequencing. Testing for significance was performed using a Kruskal–Wallis rank sum test (Fig. A, Supplementary Table 4). The analysis predicted that the bacteria inhabiting the stems of diseased plants would have the highest functional potential for nitrogen (9.29 ± 2.19%), nitrate (8.05 ± 3.70%), and nitrite (8.79 ± 2.28%) respiration; nitrite (7.55 ± 1.85%) and nitrate (7.55 ± 1.55%) ammonification; nitrate reduction (10.47 ± 4.16%); and plant pathogens (8.37 ± 2.04%), presumably associated with the highest relative abundance of Pectobacterium (Fig. B). The most common functional groups of fungi were undefined saprotrophs in the bulk (25.19 ± 2.21%) and rhizosphere (37.09 ± 2.90%) soils of healthy plants, while in the diseased plants, these functional groups were dominant in the bulk soil (13.91 ± 1.03%), roots (18.12 ± 1.92%), and rhizomes (17.12 ± 1.28%). Interestingly, the highest levels associated with the ecological guild of plant pathogens were observed in the rhizosphere (27.75 ± 2.08%) and rhizomes (27.75 ± 2.19%) of diseased plants (Fig. C), associated with an increased relative abundance of various potential pathogens in these microbial niches (Fig. D, Supplementary Table 5). The alpha diversity analysis indicates that plant roots and rhizomes harbor a significant number and variety of bacterial microbes. However, the presence of rhizome rot disease reduced these indices. In contrast, plant disease increased the diversity of the fungal microbes. Beta diversity analysis revealed changes in the composition of microbial communities in rhizosphere soil due to plant disease. Additionally, the composition of the bacterial microbiome in rhizomes and roots differed from that in stems and leaves in healthy plants, but the disease nullified this difference. 
Bacterial and fungal taxa potentially involved in plant health We used linear discriminant analysis (LDA) effect size (LEfSe) to identify discriminative features at taxonomic levels for overall plant health regardless of the microbial niche. This study focused on potentially pathogenic and disease-suppressive microbes in soil and endophytes in plant tissues. In total, 105 taxa (from phylum to species) were identified with a log10 (LDA) score > 4.0 and a P value < 0.05. In the LEfSe analysis, we found seven plant-endophyte bacteria (Fig. A) and five soilborne bacteria (Fig. B) that are biomarkers for plant health. Specifically, we observed that bacteria such as s_unclassified_g_ Sphingomonas , Quadrisphaera granulorum , and Methylobacterium komagatae were significantly enriched in healthy plants. On the other hand, bacteria like P. carotovorum subsp. brasiliense , s_unclassified_f_Alcaligenaceae, Alcaligenes faecalis , and Klebsiella aerogenes were found to be significantly increased in diseased plants. Additionally, we found certain bacteria enriched in the soil of healthy and diseased plants. Four species of endophyte plant fungi (Fig. A) and ten soil-borne fungi (Fig. B) were identified as potential biomarkers. In healthy plants, s_unclassified_k_Fungi (from phylum to species) was significantly enriched. Biomarkers associated with s_unclassified_g_ Cheilymenia (from class to species), Pseudaleuria sp. (from genus to species), Lophotrichus sp. (order to species), Pseudogymnoascus sp. (from class to species), Gymnoascus sp. (order to species), Mortierella polycephala (phylum to species), and Eleutherascus cristatus (from family to species) were significantly increased in the soil of healthy plants. In diseased plants, Gibellulopsis piscis (from phylum to species), Pyxidiophorales sp. (from class to species), and Plectosphaerella cucumerina (from phylum to species) were enriched, serving as potential biomarkers of disease. However, only three fungal biomarkers were characteristic of the soil of diseased plants: P. cucumerina (from genus to species), Trichoderm longibrachiatum (species), and Fusarium nematophilum (species). A probabilistic graph model related to a co-occurrence Bayesian network model (Supplementary Fig. 2A and B) shows a robust core bacterial and fungal microbiota and biomarkers linked to ginger plant health, including identifying Pectobacterium associated with rhizome rot in ginger plants. A greater network complexity has been associated with microbial communities exhibiting more intense activity and higher resilience to perturbation . Our analysis of intrakingdom networks revealed a higher network complexity associated with increased nodes and edges in the bacterial networks (Supplementary Fig. 2B and C) than in the fungal networks (Supplementary Fig. 2D and E). The node average degree (206.48) and positive edges (48,524) were higher in the bacterial co-occurrence network in diseased plants than the node average degree (106.67) and positive edges (24,449) in healthy plants. However, the bacterial community in healthy plants had much higher opposing edges (1458) than in diseased ones (206). For fungal networks, the node average degree (35.97), positive edges (6,487), and opposing edges (23) were all higher in the healthy plants related to the node average degree (29.64), positive edges (4,816), and opposing edges (cero) in diseased plants. 
Network statistics can determine the importance of microorganisms in co-occurrence networks ; in a co-occurrence network, hub or keystone species can be inferred by identifying species with the highest network centrality indices. The network analysis revealed that all bacterial biomarkers were highly prevalent in the system. Bacillus and Sphingomonas were identified as the most crucial nodes in the genus-level network within the healthy ginger ecosystem (Supplementary Table 6). The co-occurrence network for diseased plants revealed significant bacterial biomarkers. However, the ginger pathogen Pectobacterium was identified as the top-ranking bacterium (Supplementary Table 7). Moreover, the fungal biomarker exhibited a significant correlation within the microbial co-occurrence network of healthy ginger plants. Notably, Pseudaleuria and Mortierella emerged as prominent nodes with high degrees within the top 10 hub nodes (Supplementary Table 8). Conversely, the fungal networks of diseased plants featured Plectosphaerella and Gibellulopsis (Supplementary Table 9). These results emphasize the potential significance of these microbial strains in preserving the health of ginger plants.
To determine the microbial composition and relative abundance in the niches of healthy and diseased plants, we constructed pie diagrams to represent the number of generalist (shared) and specialist (inhabitants of a single niche) microbes between the niches of all the plants. The total number of microbes per niche included generalists, specialists, and those inhabiting more than one niche, known as satellites. A greater number of bacterial genera was found in the microbial communities of healthy plants compared to diseased plants, except in the rhizosphere soil, where the number of bacterial genera was higher in the diseased plants. A total of 3331 bacterial genera were identified in the healthy plants, with 138 unique to that group. In contrast, 2512 genera were detected in the diseased plants, with 58 unique to that group (Fig. ). Only two representatives of archaeal microorganisms (g_norank_f_ Nitrososphaeraceae and g_ Candidatus _ nitrocosmicus ) were present in the analyzed soil samples analyzed, and their relative abundances were very low to be included in further analyses. Eighty-three bacterial genera were identified as members of the core (generalist) bacterial microbiota (Fig. C, Supplementary Table 3). The most abundant bacterial genera were Flavobacterium (10.48%), Acidovorax (8.78%), Sphingomonas (7.92%), Methylobacterium - Methylorubrum (6.38%), and Bacillus (5.07%). Compared with the same niches in the diseased plants, all the organs of the healthy plants except for the stem harbored the largest number of endophyte bacteria (total; specialist); this trend was more notable in the leaves and rhizomes than in the other organs (Fig. A). Healthy plants’ roots (588; 11) and rhizomes (550; 25) harbored the most significant number of endophyte bacterial genera. However, rhizome rot strongly reduced the number of these in the roots (220; 1) and rhizomes (332; 3). Diseased plants exhibited fewer specialist bacteria in all plant organs except the stems. The most abundant fungal generalist genera were unclassified_k_Fungi (85.89%) and Gibellulopsis (5.20%) (Supplementary Table 3). The greatest number of fungal genera (total; specialist) was detected in the niches of healthy plants (922; 61) compared with diseased plants (833; 38) so the presence of the specialists was noticeably more affected by the disease. A greater abundance of fungal genera was observed in the rhizomes of diseased plants (70; 1) than in those of healthy plants (53; 0) (Fig. B). Most of the taxa with relatively high abundances inside the diseased ginger plants were also detected in the soil, indicating that these taxa might have colonized the plants from the ground. Interestingly, the rhizomes of the diseased plants harbored a greater diversity of fungi than did those of the healthy plants, while the opposite occurred for bacteria.
To quantify the diversity and summarize the structural changes in the microbial community, we first used the Kruskal‒Wallis test to calculate the microbial alpha diversity across all niches of healthy and diseased plants. The soil samples showed the highest diversity of bacteria (Fig. A, C) and fungi (Fig. B, D). The microbial communities in the rhizosphere were similar to those in the bulk soil, except for the fungal community in healthy plants, which was notably richer in the rhizosphere (Sobs index: 180 ± 21.6). Significant differences were observed in the bacteria and fungal populations among healthy plants. The disease significantly reduced bacterial richness in both roots (healthy, 396.3 ± 56.2; diseased, 130.0 ± 26.2; P = 0.0439) and rhizomes (healthy, 353.7 ± 44.5; diseased, 185.0 ± 35.0; P = 0.0431) (Fig. A). Healthy plants had higher bacterial diversity in rhizomes (Shannon H': 4.30 ± 0.6). Diseased plants showed significant differences in fungal richness (Sobs index: 24.33 ± 9.3, Fig. B) and diversity (Shannon H': 0.04 ± 0.0, Fig. D) in rhizomes compared to healthy plants. To assess the microbial community dissimilarity between the niches of healthy and diseased plants, principal coordinate analysis (PCoA) based on Bray–Curtis distance was performed (Fig. E, F). The closer the distance between samples in the PCoA diagram, the more similar their community composition. The analysis revealed differences in bacterial and fungal microbiome compositions between healthy and diseased plants. The first two axes account for about 50% and 47.5% of the variation for bacterial microbiomes (PERMANOVA, R = 0.70, P < 0.001; ANOSIM: R = 0.73, P < 0.001) and fungal microbiomes (PERMANOVA: R 2 = 0.63, P < 0.001; ANOSIM: R = 0.39, P < 0.001), respectively. Different plant niches displayed distinct microbial communities, suggesting a potential link to plant health. These findings indicate that plant health is connected to unique microbial communities in various parts of the plant. Additionally, functional signatures related to plant health status were predicted via FAPROTAX analysis based on the classification results from 16S amplicon sequencing. Testing for significance was performed using a Kruskal–Wallis rank sum test (Fig. A, Supplementary Table 4). The analysis predicted that the bacteria inhabiting the stems of diseased plants would have the highest functional potential for nitrogen (9.29 ± 2.19%), nitrate (8.05 ± 3.70%), and nitrite (8.79 ± 2.28%) respiration; nitrite (7.55 ± 1.85%) and nitrate (7.55 ± 1.55%) ammonification; nitrate reduction (10.47 ± 4.16%); and plant pathogens (8.37 ± 2.04%), presumably associated with the highest relative abundance of Pectobacterium (Fig. B). The most common functional groups of fungi were undefined saprotrophs in the bulk (25.19 ± 2.21%) and rhizosphere (37.09 ± 2.90%) soils of healthy plants, while in the diseased plants, these functional groups were dominant in the bulk soil (13.91 ± 1.03%), roots (18.12 ± 1.92%), and rhizomes (17.12 ± 1.28%). Interestingly, the highest levels associated with the ecological guild of plant pathogens were observed in the rhizosphere (27.75 ± 2.08%) and rhizomes (27.75 ± 2.19%) of diseased plants (Fig. C), associated with an increased relative abundance of various potential pathogens in these microbial niches (Fig. D, Supplementary Table 5). The alpha diversity analysis indicates that plant roots and rhizomes harbor a significant number and variety of bacterial microbes. 
However, the presence of rhizome rot disease reduced these indices. In contrast, plant disease increased the diversity of the fungal microbes. Beta diversity analysis revealed changes in the composition of microbial communities in rhizosphere soil due to plant disease. Additionally, the composition of the bacterial microbiome in rhizomes and roots differed from that in stems and leaves in healthy plants, but the disease nullified this difference.
We used linear discriminant analysis (LDA) effect size (LEfSe) to identify discriminative features at taxonomic levels for overall plant health regardless of the microbial niche. This study focused on potentially pathogenic and disease-suppressive microbes in soil and endophytes in plant tissues. In total, 105 taxa (from phylum to species) were identified with a log10 (LDA) score > 4.0 and a P value < 0.05. In the LEfSe analysis, we found seven plant-endophyte bacteria (Fig. A) and five soilborne bacteria (Fig. B) that are biomarkers for plant health. Specifically, we observed that bacteria such as s_unclassified_g_ Sphingomonas , Quadrisphaera granulorum , and Methylobacterium komagatae were significantly enriched in healthy plants. On the other hand, bacteria like P. carotovorum subsp. brasiliense , s_unclassified_f_Alcaligenaceae, Alcaligenes faecalis , and Klebsiella aerogenes were found to be significantly increased in diseased plants. Additionally, we found certain bacteria enriched in the soil of healthy and diseased plants. Four species of endophyte plant fungi (Fig. A) and ten soil-borne fungi (Fig. B) were identified as potential biomarkers. In healthy plants, s_unclassified_k_Fungi (from phylum to species) was significantly enriched. Biomarkers associated with s_unclassified_g_ Cheilymenia (from class to species), Pseudaleuria sp. (from genus to species), Lophotrichus sp. (order to species), Pseudogymnoascus sp. (from class to species), Gymnoascus sp. (order to species), Mortierella polycephala (phylum to species), and Eleutherascus cristatus (from family to species) were significantly increased in the soil of healthy plants. In diseased plants, Gibellulopsis piscis (from phylum to species), Pyxidiophorales sp. (from class to species), and Plectosphaerella cucumerina (from phylum to species) were enriched, serving as potential biomarkers of disease. However, only three fungal biomarkers were characteristic of the soil of diseased plants: P. cucumerina (from genus to species), Trichoderm longibrachiatum (species), and Fusarium nematophilum (species). A probabilistic graph model related to a co-occurrence Bayesian network model (Supplementary Fig. 2A and B) shows a robust core bacterial and fungal microbiota and biomarkers linked to ginger plant health, including identifying Pectobacterium associated with rhizome rot in ginger plants. A greater network complexity has been associated with microbial communities exhibiting more intense activity and higher resilience to perturbation . Our analysis of intrakingdom networks revealed a higher network complexity associated with increased nodes and edges in the bacterial networks (Supplementary Fig. 2B and C) than in the fungal networks (Supplementary Fig. 2D and E). The node average degree (206.48) and positive edges (48,524) were higher in the bacterial co-occurrence network in diseased plants than the node average degree (106.67) and positive edges (24,449) in healthy plants. However, the bacterial community in healthy plants had much higher opposing edges (1458) than in diseased ones (206). For fungal networks, the node average degree (35.97), positive edges (6,487), and opposing edges (23) were all higher in the healthy plants related to the node average degree (29.64), positive edges (4,816), and opposing edges (cero) in diseased plants. 
Network statistics can determine the importance of microorganisms in co-occurrence networks ; in a co-occurrence network, hub or keystone species can be inferred by identifying species with the highest network centrality indices. The network analysis revealed that all bacterial biomarkers were highly prevalent in the system. Bacillus and Sphingomonas were identified as the most crucial nodes in the genus-level network within the healthy ginger ecosystem (Supplementary Table 6). The co-occurrence network for diseased plants revealed significant bacterial biomarkers. However, the ginger pathogen Pectobacterium was identified as the top-ranking bacterium (Supplementary Table 7). Moreover, the fungal biomarker exhibited a significant correlation within the microbial co-occurrence network of healthy ginger plants. Notably, Pseudaleuria and Mortierella emerged as prominent nodes with high degrees within the top 10 hub nodes (Supplementary Table 8). Conversely, the fungal networks of diseased plants featured Plectosphaerella and Gibellulopsis (Supplementary Table 9). These results emphasize the potential significance of these microbial strains in preserving the health of ginger plants.
Overview of metabolite information We used untargeted metabolomics to simultaneously detect and analyze small-molecule metabolites that impact microbiome assembly and the health of ginger plants. The metabolomes of the niches corresponding to the vegetative organs of healthy and diseased plants were analyzed via LC–MS/MS, which revealed a total of 10,415 chromatographic peaks with 735 metabolites, 500 of which were in the library (annotated to public databases like HMDB and Lipidmaps), and 199 of which were annotated to the KEGG database (Table , Supplementary Table 10). The metabolites identified across all the samples included 170 lipids and lipid-like molecules, 79 organic acids and derivatives, 63 organic oxygen compounds, and other compounds (Fig. A). The highest numbers of differentially accumulated metabolites (total; specific to each niche) were found in the rhizomes (164; 86), followed by the leaves (135; 63), roots (89; 35), and stems (76; 25). Interestingly, a metabolite associated with the health of the whole ginger plant (6-({[3,4-dihidroxi-4-(hidroximetil)oxolan-2-il]oxi}metil)oxano-2,3,4,5-tetrol) was identified (Fig. B). The plant health-associated microbiome is driven by the metabolome The relationship between plant health-associated microbes and metabolites was examined. Procrustes analyses were performed using distance plots (PCA) as input based on the matrix of endophytic microbial communities (Bray–Curtis). Significant associations were found between certain bacterial (M2 = 0.58, P = 0.00) and fungal (M2 = 0.84, P = 0.04) genera and metabolite synthesis. The associations varied based on plant health and microbial niches (Fig. A). In diseased plants, only fungi in the rhizomes and roots were closely linked to metabolite synthesis (Fig. B). Furthermore, the metabolites that drive the assembly of the potentially plant health-determining microbiota according to the LEfSe analysis are detailed below. Similarly, trans-EKODE-(E)-Ib ( P = 0.0137) and 2,3-dinor prostaglandin E1 ( P = 0.0359) were positively related to Sphingomonas , while piperidine ( P = 0.0004), cyclohexane ( P = 0.0006), tripropylamine ( P = 0.0005), palmitoleamide ( P = 0.0012), farnesyl acetone ( P = 0.0031), and C16 sphinganine ( P = 0.0055) were negatively correlated with this bacterial genus. trans-EKODE-(E)-Ib ( P = 0.0341) was positively related, while ethyl hydrogen sulfate ( P = 0.0008), 2-dodecylbenzenesulfonic acid ( P = 0.0009), piperidine ( P = 0.0034), farnesyl acetone ( P = 0.0040), 9,12,15-octadecatrien-1-ol ( P = 0.0042), 13Z-docosenamide ( P = 0.0090), palmitoleamide ( P = 0.0108), cyclohexane ( P = 0.0227), palmitic amide (0.0358), p-chlorophenylalanine ( P = 0.0424), and tripropylamine ( P = 0.0452) were negatively correlated with Methylobacterium–Methylorubrum . Moreover, PA (16:0/18:2(9Z,12Z)) ( P = 0.0120) and isocitrate ( P = 0.0482) were positively related to Quadrisphaera . 2-Amino-4-methylpentanoic acid ( P = 0.0421) and p-chlorophenylalanine ( p = 0.0449) were positively related to Pectobacterium , while 6-{4-[3-(3,7-dimethylocta-2,6-dien-1-yl)-7-hydroxy-8-(4-hydroxy-3-methylbut-2-en-1-yl)-4-oxo-4H-chromen-2-yl]-3-hydroxyphenoxy}-3,4,5-trihydroxyoxane-2-carboxylic acid ( P = 0.0364) and quercetin tetramethyl (5,7,3',4') ether ( P = 0.0379) were negatively correlated with these bacteria. 
There was a significant positive correlation between linoleamide ( P = 0.0185) and Alcaligenes , and a negative correlation between this bacterial genus and DG (18:4 (6Z,9Z,12Z,15Z)/18:2 (9Z,12Z)/0:0) ( P = 0.0086) was detected. DG (18:4(6Z,9Z,12Z,15Z)/18:2(9Z,12Z)/0:0) was negatively correlated with Klebsiella ( P = 0.005). DG (18:4(6Z,9Z,12Z,15Z)/18:2(9Z,12Z)/0:0) ( P = 0.005) and 6-{4-[3-(3,7-dimethylocta-2,6-dien-1-yl)-7-hydroxy-8-(4-hydroxy-3-methylbut-2-en-1-yl)-4-oxo-4H-chromen-2-yl]-3-hydroxyphenoxy}-3,4,5-trihydroxyoxane-2-carboxylic acid ( P = 0.0025) were negatively correlated with Enterobacter (Fig. C). 12,13-Epoxy-9,15-octadecadienoic acid ( P = 0.0235), L-glutamate ( P = 0.0353), and (E)-9,12,13-trihydroxyoctadec-10-enoic acid ( P = 0.0481) were positively associated with Gibellulopsis , while PA(16:0/18:2(9Z,12Z)) ( P = 0.0079) and LysoPA(0:0/18:2(9Z,12Z)) ( P = 0.0207) were negatively related to this genus. 12,13-Epoxy-9,15-octadecadienoic acid ( P = 0.0468) was positively correlated with Plectosphaerella . ( ±)9-HpODE ( P = 0.0428) was positively correlated with Lophotrichus . Isocitrate ( P = 0.0087) and 3'-methoxy--gingerdiol 3,5-diacetate ( P = 0.0167) significantly affected the presence of Mortierella (Fig. D). The metabolome can directly impact the health of ginger plants To identify the compounds that play roles in plant health, the VIP combined with univariate statistical analysis was used. Of the total metabolites (anion plus cation), 470 (4.51%) were enriched and 711 (6.83%) were depleted in healthy ginger plants compared to diseased ginger plants (Fig. A). One hundred five annotated metabolites exhibited significant differences in abundance (Welch’s two-sided t test, P < 0.05) between the diseased and healthy ginger plants. In contrast, the abundances of 469 annotated metabolites were unchanged in either plant group (Fig. B). The abundance of 74 named metabolites was reduced, and 31 annotated metabolites were enriched in healthy plants compared to diseased plants. Particularly notorious, niacinamide, a heterocyclic aromatic amide ( P < 0.001), the metabolic intermediates involved in de novo lipid synthesis 1-oleoyl lysophosphatidic acid ( P < 0.001), and the phospholipid PG (16:0/16:0) ( P < 0.05) were enriched in healthy ginger plants, while the nonproteinogenic L-alpha-amino acid 4-methylene-L-glutamine, the alkaloid xanthine, and the purine derivative hypoxanthine, among others, were significantly more abundant ( P < 0.001) in diseased plants (Fig. C). Metabolite profiles of plant niches revealed that niacinamide and PG (16:0/16:0) were upregulated in rhizomes (VIP value = 2.56, P = 0.0002, and VIP value = 2.37, P = 0.0012) and leaves (VIP value = 2.54, P = 0.0002, and VIP value = 2.36, P = 0.0012), while 1-oleoyl lysophosphatidic acid was upregulated in rhizomes (VIP value = 2.19, P = 0.0003) and roots (VIP value = 2.19, P = 0.0003) of healthy plants. In diseased plants, 4-methylene-L-glutamine was upregulated in leaves (VIP = 3.49, P = 0.0000). Hypoxanthine and xanthine also were upregulated in leaves (VIP = 3.16, P = 0.0000, and VIP = 3.29, P = 0.0000), and the latter was also in rhizomes (VIP = 3.16, P = 0.0000) (Supplementary Table 11). The analyzed data support that the identified metabolites drive the assembly of the healthy endophytic microbiota and directly influence plant health. However, further research is required to define whether the metabolites come from the plants or their microbiota.
We performed untargeted metabolomic and metataxonomic analyses based on 16S rRNA gene and internal transcribed spacer (ITS) amplicons to identify metabolome-driven microbiome changes associated with ginger plant health and rhizome rot disease. The key findings of our study present a comprehensive overview of the biodiversity of the soilborne and endophytic microbiota in both healthy and diseased ginger plant environments. This highlights the bacterial and fungal microbes that may contribute to plant health, as well as the specific metabolites that play a role in healthy microbial assembly and overall plant health. Members of Proteobacteria, such as Burkholderiales, Rhizobiales, and Enterobacteriales, were the predominant members of the global bacterial community in ginger plants. Actinobacteria, Bacteroidetes, and Firmicutes followed in abundance. This differs from the top four phyla reported for natural ecosystems. However, it has been reported that host species and soil type, crop rotation, and environmental conditions such as temperature, relative humidity, and pH cooperatively modulate microbiome assembly. The global fungal assemblage of ginger plants was dominated by members of Ascomycota, with Hypocreales, Glomerellales, Pezizales, and Sordariales being the most abundant. The kingdom of fungi, including true fungi (Fungi) and fungus-like organisms (e.g., Oomycota), is the second largest group of organisms, with an estimated 2.2 to 3.8 million species worldwide. Surprisingly, approximately 60% of the fungal taxa were classified as unclassified_k_Fungi, indicating a need for further analysis. More comprehensive information on the complete ITS sequences of these microbes in databases is required to address this issue. Several agents can cause soft rot (rhizome rot) in ginger, but the main causal agents are generally fungi of the genera Fusarium and Pythium. Interestingly, despite sequencing, these soilborne fungi were rarely detected. Previous studies also failed to identify Pythium, possibly due to the limitations of the ITS region. The ITS3/ITS4 primer set has been used effectively to analyze soil fungal biodiversity in various soil types. DNA metabarcoding targeting the ITS region revealed the widespread presence of potentially plant-pathogenic Phytophthora and Pythium species in rhizospheric soil associated with internationally transported plants. However, the ITS region lacks sufficient resolution for distinguishing closely related species of indoor and foodborne molds, plant pathogens, or other fungi, for which secondary barcode markers have been suggested. Using ITS3/ITS4 barcoding, we identified these species in the ginger ecosystem, with the exception of the oomycetes. Further research is required to better understand the absence of such globally widespread fungal species in ginger ecosystems. One possible explanation is that manure application promotes saprotrophic fungi while suppressing potentially pathogenic soilborne fungi. Pectobacterium spp. use the synchronized production of plant cell wall-degrading enzymes (PCWDEs) as their primary virulence attribute. These bacteria enter the host through stomatal openings and wounds, colonizing xylem vessels, parenchyma, and protoxylem cells. At the genus level, 16S rRNA gene sequencing revealed Flavobacterium, Acidovorax, Sphingomonas, Methylobacterium-Methylorubrum, and Bacillus as the most abundant genera. These genera were shared across all the ginger microbial niches.
Research on the assembly of the bacterial microbiota in the endosphere and rhizosphere of rice plants has identified Acidovorax, Sphingomonas, Bacillus, and Pseudomonas as members of the core generalist microbiota. The diversity and species richness of the ginger microbiota narrowed from the soil, as a "seed bank," to the plant organs, which suggests that the plants actively filtered the microbiota composition. Rhizome rot disease causes a significant change in the microbial community of ginger plants, especially in terms of microbial diversity. This change may be due to the plant's reduced ability to filter organisms as the disease progresses. The microbial structure detected in the rhizomes of both healthy and diseased plants revealed that specialist microbes did not cause rhizome rot. Instead, an imbalance caused by satellite microbes such as Pectobacterium was primarily detected in the stems and rhizomes of diseased plants. Saprotrophic fungi often take advantage of weakened diseased plants by colonizing their roots and rhizomes, whereas healthy plants keep them confined to the soil. The presence of these fungi suggests that the cause of rhizome rot disease is a necrotrophic pathogen that kills plant cells to feed on dead tissues and encourages the presence of other saprotrophs. Most plant pathogens were mainly found in the rhizomes of diseased plants, although they were detected in all plant organs. This aligns with disease symptoms that spread to the entire plant. Healthy plants harbored a greater number and variety of bacterial microbes than diseased plants, while rhizome rot increased the diversity of fungal microbes. Changes in microbiota composition have been associated with immune suppression during pathogen infections. In the leaves of Arabidopsis immune-compromised mutants, the Shannon diversity index and the relative abundance of Firmicutes were significantly decreased, while Proteobacteria were more prevalent. These findings are similar to some aspects of dysbiosis in human inflammatory bowel disease. The higher diversity of endophytic bacteria in healthy plants is likely due to the abundance of beneficial bacteria. Conversely, diseased plants have a more diverse range of bacteria in the rhizosphere, possibly due to decaying roots providing nutrients for soil organisms. In a study involving tobacco plants infected with Ralstonia solanacearum wilt, researchers found that healthy plants had a greater diversity of microorganisms than diseased plants. They observed increased levels of certain bacteria that promote plant growth and suppress diseases. Similarly, healthy mulberry plant samples exhibited greater diversity of beneficial bacteria compared to those infected with bacterial wilt. Among the bacterial species important in keeping plants healthy, Q. granulorum is capable of nitrification, denitrification, and polyphosphate accumulation. M. komagatae has been reported to be a potential biostimulator against fungal pathogens of ginger. Sphingomonas species have variable functions, ranging from the remediation of environmental pollution to the production of highly beneficial plant growth regulators, and some strains are also involved in nitrogen fixation. Bacillus spp. serve multiple ecological functions, from soil nutrient cycling to inducing plant growth and stress tolerance. In contrast, among the bacteria associated with the disease, only the P. brasiliense strain TS20HJ1 has been isolated from ginger rhizome and shown to cause soft rot symptoms.
A. faecalis is a heterotrophic nitrifying bacterium that oxidizes ammonia and generates nitrite and nitrate, and K. aerogenes significantly enhances the production of plant biomass and plant secondary metabolites. In relation to the fungi that were enriched in the disease-suppressive soil, Pseudaleuria has been negatively correlated with the disease severity index of Pisum sativum L., and its abundance was favored by the application of manure rather than mineral fertilization. A high abundance of Pseudogymnoascus in the rhizosphere contributes to nutrient cycling and helps crops better adapt to the environment; these fungi are antagonistic to potato scab pathogens. Gymnoascus spp. can also antagonistically affect pathogens and promote plant growth. Mortierella species promote plant growth and have beneficial effects by modifying the soil microbiological composition. Interestingly, P. cucumerina served as a biomarker for the endophytic microbiota of diseased plants and soil. The Plectosphaerellaceae species G. piscis and P. cucumerina have been previously described as pathogens of essential crop plants. However, to our knowledge, these fungi have not been previously reported as pathogens of ginger. Analyses of the correlations between microbial communities and metabolomes remain scarce. Specific metabolites can attract beneficial microbes that defend against pathogens, while others exclude specific species from the microbial community. Our results revealed a metabolome-associated deterministic assembly process in the microbiota of the various microbial niches of ginger plants. The highest number of differentially accumulated metabolites between healthy and diseased plants was found in the plant compartments that hosted a greater diversity of fungal microbes, i.e., the rhizomes and roots. Recent research on ginger has revealed detailed information about more than 60 of its bioactive compounds, including phenolic compounds, terpenes, polysaccharides, lipids, and dietary fibers. Some of these compounds can attract beneficial microbes that protect the plant from pathogens, while others may harm the microbial community. Remarkably, our research has shown that lipids and lipid-like molecules are the most prevalent metabolites, among the more than 700 identified using untargeted metabolomics, that contribute to the health of ginger plants, particularly in preventing rhizome rot disease. Lipids, a principal constituent of cell membranes, act as the interface and mediate cell signaling pathways after microbe recognition, allowing advantageous resource exchange or inhibiting interaction through downstream signaling cascades. Furthermore, when plants are exposed to necrotrophic pathogens such as Pectobacterium species, their immune responses often involve oxylipins, signaling molecules derived from oxygenated fatty acids and related metabolites. We hypothesized that the metabolites exhibiting greater variability in abundance between healthy and diseased ginger plants may be closely associated with the plants' responses to disease onset. Interestingly, the organooxygen compound 6-({[3,4-dihydroxy-4-(hydroxymethyl)oxolan-2-yl]oxy}methyl)oxane-2,3,4,5-tetrol was the only metabolite overexpressed in all the vegetative organs of healthy plants relative to those of diseased plants, but its role in plant protection needs to be elucidated. The levels of numerous rice amino acids increased in response to high saline–alkali stress, with threoninyl-proline showing the most significant increase.
Glu-Val is a dipeptide composed of L-valine and L-glutamic acid residues. Amino acids and their metabolites have also been observed to stimulate the immune system in plants. Treating rice roots with Glu, and to a lesser extent Val, led to systemic disease resistance against rice blast ( Magnaporthe oryzae ) in leaves. Niacinamide derivatives have been synthesized, and their fungicidal activity has been demonstrated. Arachidonic acid (AA), a microbe-associated molecular pattern (MAMP) not commonly found in plants, is a potent elicitor of plant defense. Treating roots with AA protected pepper and tomato seedlings from root and crown rot caused by Phytophthora capsici, leading to lignification at sites of attempted infection. A relative of the ginger health biomarker M. polycephala, M. alpina, has also been identified as an attractive AA producer. In transgenic A. thaliana plants producing arachidonic acid, levels of jasmonic acid were increased, while levels of salicylic acid were decreased. 4-Methylene-L-glutamine is a nonproteinogenic L-alpha-amino acid that has been implicated in the transport of nitrogen; coincidentally, the most prominent features of bacterial dysbiosis related to rhizome rot are related to the nitrogen cycle. Asparagine accumulation as part of nitrogen remobilization has been recorded in response to diverse abiotic and biotic stressors, such as disease and mineral limitation, as an adaptive process. These changes in amino acids may be the result of disease in the niches of ginger plants, although members of the Rhizobium complex of nitrogen-fixing bacteria were also enriched in the rhizome, stem, and leaves of diseased ginger plants. Palmitoleamide is a primary fatty amide. A crude extract from the endophytic fungus Botryodiplodia theobromae containing fatty acid amides was observed to be broadly antimicrobial. This metabolite accumulated in the stems of diseased ginger plants and showed a negative effect on microbes of the plant growth-promoting bacterial genus Methylobacterium–Methylorubrum. 4-Hydroxynonenal alkyne, primarily detected in the leaves of diseased ginger plants, is a significant aldehyde produced during the lipid peroxidation of ω-6 polyunsaturated fatty acids. This study has limitations, particularly concerning the abundance thresholds used for microbe inclusion, which need to be validated by culturomics methods; however, these limitations do not undermine the conclusions. Our findings provide a foundation for achieving disease suppression via modification of the metabolome-associated microbiome and have implications for further exploring pathogens, biocontrol agents, and plant growth promoters associated with this economically important crop. Most of the microbial species and metabolites reported here have not been previously identified in ginger plants. The assembly of the microbiota, rather than the occurrence of a particular microbe, drove plant health.
Supplementary Material 1: Supplementary Table 1. OTU table for the archaeal/bacterial ginger microbiome. The total number of OTUs in each of the three composite biological replicates is shown. HPBS: healthy plant bulk soil, DPBS: diseased plant bulk soil, HPRhS: healthy plant rhizosphere soil, DPRhS: diseased plant rhizosphere soil, HPRh: healthy plant rhizome, DPRh: diseased plant rhizome, HPR: healthy plant root, DPR: diseased plant root, HPS: healthy plant stem, DPS: diseased plant stem, HPL: healthy plant leaf, DPL: diseased plant leaf.
Supplementary Material 2: Supplementary Table 2. OTU table for the fungal ginger microbiome. The total number of OTUs in each of the three composite biological replicates is shown. HPBS: healthy plant bulk soil, DPBS: diseased plant bulk soil, HPRhS: healthy plant rhizosphere soil, DPRhS: diseased plant rhizosphere soil, HPRh: healthy plant rhizome, DPRh: diseased plant rhizome, HPR: healthy plant root, DPR: diseased plant root, HPS: healthy plant stem, DPS: diseased plant stem, HPL: healthy plant leaf, DPL: diseased plant leaf.
Supplementary Material 3: Supplementary Table 3. Core and specialist bacterial and fungal microbes. HPBS: healthy plant bulk soil, DPBS: diseased plant bulk soil, HPRhS: healthy plant rhizosphere soil, DPRhS: diseased plant rhizosphere soil, HPRh: healthy plant rhizome, DPRh: diseased plant rhizome, HPR: healthy plant root, DPR: diseased plant root, HPS: healthy plant stem, DPS: diseased plant stem, HPL: healthy plant leaf, DPL: diseased plant leaf.
Supplementary Material 4: Supplementary Table 4. Bacterial functional assemblages based on FAPROTAX analysis. The values are the average (mean) of three composite biological replicates; SD is the standard deviation of the mean for each microbial niche. Testing for significance was performed using a Kruskal–Wallis rank sum test.
Supplementary Material 5: Supplementary Table 5. Identification of specific ecological categories of fungi through FUNGuild functional classification. The values are the average of three composite biological replicates. HPBS: healthy plant bulk soil, DPBS: diseased plant bulk soil, HPRhS: healthy plant rhizosphere soil, DPRhS: diseased plant rhizosphere soil, HPRh: healthy plant rhizome, DPRh: diseased plant rhizome, HPR: healthy plant root, DPR: diseased plant root, HPS: healthy plant stem, DPS: diseased plant stem, HPL: healthy plant leaf, DPL: diseased plant leaf.
Supplementary Material 6: Supplementary Table 6. Co-occurrence network statistics for bacterial microbiota in healthy plants. Nodes represent microbial genera, and edges represent the statistically significant associations between nodes. Connections were drawn between significantly correlated nodes ( P < 0.05 and Spearman's r > 0.96; Spearman's rank correlation test).
Supplementary Material 7: Supplementary Table 7. Co-occurrence network statistics for bacterial microbiota in diseased plants. Nodes represent microbial genera, and edges represent the statistically significant associations between nodes. Connections were drawn between significantly correlated nodes ( P < 0.05 and Spearman's r > 0.96; Spearman's rank correlation test).
Supplementary Material 8: Supplementary Table 8. Co-occurrence network statistics for fungal microbiota in healthy ginger plants. Nodes represent microbial genera, and edges represent the statistically significant associations between nodes.
Connections were drawn between significantly correlated nodes ( P < 0.05 and Spearman's r > 0.96; Spearman's rank correlation test).
Supplementary Material 9: Supplementary Table 9. Co-occurrence network statistics for fungal microbiota in diseased ginger plants. Nodes represent microbial genera, and edges represent the statistically significant associations between nodes. Connections were drawn between significantly correlated nodes ( P < 0.05 and Spearman's r > 0.96; Spearman's rank correlation test).
Supplementary Material 10: Supplementary Table 10. Overview of metabolite information. ID: in the data matrix identified by searching the mass spectrometry library, the number of each identified ion peak is randomly assigned according to different ion modes; Metabolite: the name of the metabolite identified in this project; Metab ID: in the cloud platform analysis, the number of each identified ion peak is randomly assigned; Library ID: the accession number of the metabolite in the search database; KEGG compound ID: the accession number in the KEGG database; M/Z or Quantum Mass: mass-to-charge ratio; Retention time: the retention time of the charged ions in chromatography; Mode: ion detection mode, including positive ion and negative ion modes; Adducts: adduct ion mode, referring to the covalent bond between metabolites and cellular macromolecules; Formula: chemical formula of the metabolite; Fragmentation score: Metlin database search score; Theoretical fragmentation score: HMDB database search score; Mass error: mass deviation (ppm); CAS ID: chemical substance registration number; RSD: relative standard deviation of the quality control samples.
Supplementary Material 11: Supplementary Table 11. Niche-specific expression of metabolites associated with plant health. ID: in the data matrix obtained from the mass spectrometry search database, each ion peak is randomly numbered according to different ion modes; Metabolite: metabolite detected; VIP_value: the contribution of the metabolite to the difference between the niches of healthy and diseased plants; the higher the VIP value, the more significant the difference between the two groups for that metabolite; P_value: the significance of the difference between the two groups of samples for the given metabolite; HP: the relative expression level of the metabolite in the healthy plant samples; DP: the relative expression level of the metabolite in the samples from the diseased plants.
Supplementary Material 12: Supplementary Fig. 1. Shannon rarefaction curves of the archaeal/bacterial (A) and fungal (B) community groups at the OTU level. The rarefaction curve was calculated by randomly resampling each sample several times and then plotting the rarefied number of OTUs, defined at a 97% sequence similarity threshold, against the amount of sequencing data sampled. The abscissa represents the amount of sequencing data randomly sampled, and the ordinate represents the diversity index (Shannon index) at the OTU level. Rank abundance analysis of the archaeal/bacterial (C) and fungal (D) community groups at the OTU level. The abscissa represents the rank of the OTU, and the ordinate represents the relative percentage abundance of the OTU. The position on the abscissa of the open end of the sample curve corresponds to the number of OTUs in the sample.
Supplementary Material 13: Supplementary Fig. 2.
Co-occurrence network analysis of the microbial community associated with the health of the ginger plant. Co-occurrence of bacterial (A) and fungal (B) genera in healthy (S1) and diseased (S1D) ginger plants based on relative abundance. Different colors represent microbial genera associated with healthy (blue) and diseased (red) plants; black indicates the keystone core genera. Intra-kingdom network analysis of the ginger microbiome was conducted based on correlation analysis of taxonomic profiles in healthy (C for bacteria and E for fungi) and diseased (D for bacteria and F for fungi) ginger plants. Nodes represent microbial genera, and edges represent the statistically significant associations between nodes. Connections were drawn between significantly correlated nodes ( P < 0.05 and Spearman's r > 0.96; Spearman's rank correlation test). The red edges indicate co-occurrence (positive) correlations, and the green edges indicate mutual exclusion (negative) correlations. Hub microbes for each network are ranked according to the number of connections in the network.
Supplementary Material 14: Supplementary Fig. 3. Quality control (QC) metabolomic sample evaluation. By calculating the relative standard deviation (RSD) value of each variable in the QC samples, variables whose RSD exceeded the threshold were eliminated, and variables with RSD ≤ 30% were retained. The abscissa is the RSD value, i.e., the standard deviation/mean value, and the ordinate is the ratio of ion peaks. The dotted line indicates the values before preprocessing, while the solid line shows the results after preprocessing.
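As an illustration of the co-occurrence networks described in Supplementary Tables 6–9 and Supplementary Fig. 2, the sketch below builds a genus-level network from a relative-abundance table using pairwise Spearman correlations with the thresholds given in the legends. The table layout and function name are assumptions, and the use of the absolute correlation (so that mutual-exclusion edges are retained) is a modeling choice, not a detail confirmed by the study.

```python
import itertools
import networkx as nx
import pandas as pd
from scipy.stats import spearmanr

def cooccurrence_network(abundance: pd.DataFrame, r_cut: float = 0.96, p_cut: float = 0.05) -> nx.Graph:
    """abundance: samples x genera table of relative abundances.
    Edges connect genus pairs whose Spearman correlation passes both thresholds."""
    g = nx.Graph()
    g.add_nodes_from(abundance.columns)
    for a, b in itertools.combinations(abundance.columns, 2):
        rho, p = spearmanr(abundance[a], abundance[b])
        if p < p_cut and abs(rho) > r_cut:
            g.add_edge(a, b, weight=rho, sign="positive" if rho > 0 else "negative")
    return g

# Hub genera ranked by number of connections, as in the figure legend:
# hubs = sorted(network.degree, key=lambda kv: kv[1], reverse=True)
```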
Decompression effects on bone healing in rat mandible osteomyelitis | 0096228d-edf5-4466-bd63-e4b02dd74a07 | 8175588 | Histology[mh] | Osteomyelitis (OM) of the jaw is an inflammatory process that starts in the medullary space of the bone and progresses to cortical bone, the Haversian system, periosteum, and overlying soft tissue. This is usually caused by micro-organism infection into the bone tissues due to a trauma or odontogenic infection . The gram-positive pathogen Staphylococcus aureus ( S. aureus ) is the most common OM causative agent in both children and adults . The literature on OM of the jaw is extensive, and vast terminologies and classifications are used to describe this disease. Chronic non-bacterial OM is a rare non-infectious autoinflammatory bone disorder of unknown etiology, that occurs in all ages with 7 to 12 years peak onset with female predominance . The treatment of jaw OM in the literature is classified as surgical and non-surgical, while the aim differs depending whether or not bacterial infection is apparent . The universally acknowledged and effectual treatment is a combination of antibiotic therapy and surgery consisting of sequestrectomy, saucerization, decortication, and closed-wound suction irrigation . The surgical therapy approach has three main goals, which includes decompression and drainage of intramedullary pressure and subperiosteal abscesses caused by the osteomyelitic effect, surgical treatment of infected tissue and removal of infectious foci, and grafting healthy bone tissue into the infected area . By definition, decompression is a technique that creates a small opening in the cystic wall using surgical drains for drainage that releases intraluminal pressure that causes cystic reduction and permits gradual bone growth from the periphery . Beside cystic reduction purpose, decompression is also applied to the management of chronic suppurative osteomyelitis of the jaw and bisphosphonate-related osteonecrosis of mandible , . The postoperative exudate obtained from the wound using decompression showed increase in macrophage activation, angiogenesis and osteogenesis related proteins, downregulation of interleukin (IL) 10 and upregulation of tumor necrosis factor-α (TNF-α), IL-1, -6, -8, and -28 through quantitative analysis using immunoprecipitation high-performance liquid chromatography with minimal error range less than 5% . The mechanism of decompression in jaw OM postoperative management is pressure releasing, removing postoperative inflammatory product, and allowing gradual bone growth. In the progression of OM, most of bacterial invasion induces cascade of inflammatory host responses that lead to hyperemia, increased capillary permeability, and local inflammation of granulocytes. During this host response, proteolytic enzymes are released, creating tissue necrosis. Pus consisting of necrotic tissue, dead bacteria within white blood cells (WBC) accumulates within the medullary cavity and increases the intramedullary pressure, generating drop in blood supply . After saucerization and decortication surgical treatment, some inflammatory exudate with pathogenic bacteria is assumed to be retained after surgical closure, some of this retaining microorganisms and exudate are eventually phagocytized by the immune system. However, part of them might retained inflammatory exudate might remain in the wound area, causing wound healing delay and recurrence . 
Therefore, decompression can be applied to the treatment of jaw OM, for reducing the intraluminal pressure, removing retained pathogenic bacteria, reduce swelling, pain, trismus and aid in the bone healing and bone regeneration process. While the surgical drain eliminate pooled blood, pus, serum and edema reduction, exudate management, they promote drainage in the lymphatic vessels and dead space reduction of the surgical wound by drawing the separated surfaces together . Following that, the drain reduces the edema and trismus. Decompression interventions are classified as open and closed with the closed one subdivided as passive or active. Active negative pressure drainage, also known as vacuum-assisted wound closure or negative-pressure wound therapy is a popular method for wound care including limb wounds, soft-tissue defects, chronic OM, osteofascial compartment syndrome, amputation, and replantation. Past research results have shown the effects of using negative pressure wound therapy in the head and neck region that include decreased healing time, less pain, and full drainage effects . Despite the increasing number of studies of decompression, there are no reported studies of decompressive effects using drains in the management of jaw OM. The purpose of this study was to investigate the effectiveness of decompression using a drain compared to management without drainage in a rat model of S. aureus -induced OM using micro-computed tomography (micro-CT) and histopathological analysis. The null hypothesis of this study is that decompression does not have therapeutic effects on infectious jaw OM and does not facilitate bone healing.
Establishment of an S. aureus- infected jaw osteomyelitis rat model Fourteen 8-week-old SPF Sprague–Dawley rats (OrientBio, Seongnam, Korea) weighing 230.13 (± 13.87) g on average were used in our study. The experimental protocols were approved by the Seoul National University (SNU) Institutional Animal Care and Use Committee (SNU-121123-12-11) and Institutional Biosafety Committee of SNU (SNUIBC-R121226-1-6). The experiment was in accordance with the “Recommendations for handling of Laboratory Animals for Biomedical Research” and complied with the Committee on Safety and ethical Handling Regulations for Laboratory Experiments at SNU. Animal studies were conducted following the ARRIVE guidelines for animal research . The animal experiment was conducted at the Institute for Experimental Animals, College of Medicine, SNU, in a laboratory infection room classified as for high risk infection studies or infectious studies that use experimental animals (Animal Biosafety Level 2: ABL 2). All animals were maintained in an individually ventilated 12-h light/dark cycle cage system with the temperature ranging from 20 to 26 °C (23 ± 3 °C), and were provided rodent food and water ad libitum. The bacterial strain used in our study was S. aureus , the most common causative pathogen for jaw OM . We used a 2 to 8 °C freeze-dried S. aureus subsp. Aureus (ATCC 29213; American Type Culture Collection, Manassas, VA, US) and a Wichita designated clinical isolate that was provided by the Korean Culture Center of Microorganisms (KCCM, Seoul, Korea) . The suspended sample containing the S. aureus strain was then inoculated and spread with the spread method into a tryptic soy agar (TSA; Becton, Dickinson and Company, Franklin Lakes, NJ, US) plate medium using a sterilized inoculation loop and cultured in an incubator for 24 h at 37 °C . After incubation, a visible colony of S. aureus formed. To determine bacterial density, we used the direct method of plate count technique (PCT) and the indirect method of turbidometry . The number of bacterial inoculation was determined by PCT, in which the number of colonies formed on the plate medium is proportional to the live bacteria contained in the sample, and the dilution ratio and the number of colonies are calculated by stepwise dilutions. In the turbidity measurement, as the concentration of bacteria increases, the turbidity (absorbance) increases proportionally, therefore in order to measure turbidity as the actual number of bacteria, a correlation must be obtained. This can be obtained by measuring the number of bacteria with the direct plate count technique in parallel. The bacterial colony was harvested and was washed two times with phosphate-buffered saline (1 × PBS) by vortexing and by centrifuge. The suspended S. aureus solution was transferred to a new glass cuvette containing 1 × PBS and was adjusted to an optical density (OD) of 0.8 using a UV/VIS spectrophotometer (LAMBDA 850 + UV/Vis Spectrophotometer; PerkinElmer, Waltham, MA, US) at 600 nm with a clear PBS solution as a control (Fig. a). For the study, the TSA culture was diluted by 4 different OD values in four steps: (OD = 0.2) 1.1 × 10 6 ; (OD = 0.4) 2.0 × 10 6 ; (OD = 0.6) 4.5 × 10 6 ; (OD = 0.8) 1.1 × 10 7 . The bacterial inoculation was then determined to be (OD = 0.8) 1 × 10 7 CFU/ml at 600 nm, as the optimal bacterial amount required to induce jaw OM. The infection with S. aureus was performed using a local inoculation route by injecting the bacterial suspension through the created defect . 
The inoculation procedure was performed under general anesthesia with 90 mg/kg ketamine (50 mg/ml) (Ketamine; Yuhan, Seoul, Korea) and 10 mg/kg xylazine (23.32 mg/ml) (Rompun; Bayer Korea, Ansan, Korea) administered intraperitoneally. The preparations for the surgical procedure, including skin preparation, disinfection, and draping, were performed according to standard protocols (Fig. b). An approximately 12 mm full-thickness longitudinal extra-oral incision was made parallel to the inferior border of the mandible on the right and left sides. Adequate subcutaneous (Fig. c), deep fascial, and periosteal dissections were performed, followed by retraction with forceps (Fig. d). Using a low-speed handpiece with a 1.2 mm diameter round bur, bilateral circular 4 mm defects were created in the rat mandible (Fig. e) under copious irrigation. The defect was made from the buccal side, inferior to the incisor tooth root, posterior to the second molar, and at the attachment site of the superficial masseter muscle. Considering the anatomy of the rat and the objective of the study, a circular 4 mm defect is a generally accepted mandibular bone defect. All animals received a 20 μl injection of 10⁷ CFU/ml S. aureus (Fig. f) into the defect, which was then covered with fibrin glue (Greenplast Q; Green Cross, Yongin, Korea) to prevent bacterial leakage (Fig. g). The surgical wound was then carefully sutured at the subcutaneous layer with resorbable 4-0 Vicryl sutures (Polyglactin 910; Johnson & Johnson, Somerville, NJ, US), and skin closure was performed using 2-0 silk sutures (BLACK SILK 4-0; AILEE, Busan, Korea) (Fig. h).

Grouping and experimental design

The animals were randomly divided into control (non-decompression groups, C1 and C2) and experimental groups (decompression groups, E1 and E2) (Table ). The C1 group (n = 3) served as the control group and received only wound closure. The C2 control group (n = 4) received conventional surgical curettage for jaw OM. The experimental animals were further classified into two subgroups: the E1 group (n = 3), which received removal of pus and necrotic bone tissue, curettage, and insertion of a tube drain, and the E2 group (n = 4), which received removal of pus and necrotic bone tissue, curettage, drain insertion, and weekly irrigation with normal saline. Blood samples were collected from the tail vein pre-infection, 1 week post-infection, 2 weeks post-infection (the start of treatment), 1 week after treatment, and 4 weeks after treatment, and rat weights were recorded. Surgical treatment was performed under the same general anesthesia protocol used for the inoculation procedure described above. Silicone tubes approximately 2 cm in length, with a 2 mm inner diameter and a 3 mm outer diameter (Translucent PFA Tubing; DAIHAN Scientific, Wonju, Korea), were used as drains. The length of the tube was adjusted for each animal according to the post-curettage conditions, and the tubes were sutured in place using 4-0 silk sutures. To keep the tubes intact and in place, a plastic collar was used to prevent scratching and accidental displacement of the draining tubes. Animals that died during the experimental trial were recorded for weight loss and clinical symptoms of OM of the jaw. Upon completion of the six-week experimental trial, the animals were euthanized by CO₂ inhalation. The mandibles of the rats were immediately harvested and carefully isolated.
The analysis of bone healing with micro-CT

The rat mandible specimens were subjected to high-resolution micro-CT scanning (SKYSCAN 1172; Bruker, Kontich, Belgium). The scanning parameters were set to an Al filter of 0.5 mm, a source voltage of 70 kV, a source current of 141 μA, and 360° rotation at 0.4° rotation steps. This resulted in images that were 496 pixels in width and 900 pixels in height. Following the scanning procedure, the raw datasets were reconstructed using NRecon software (NRecon 1.6.9.8; Bruker, Kontich, Belgium). Smoothing was set to 6, ring artifact correction to 7, and beam hardening correction to 10%. Each dataset was opened and further adjusted using DataViewer software (DataViewer; Bruker, Kontich, Belgium). The regions of interest (ROIs) were determined in the sagittal plane, and image analysis was performed using CTAn software (CTAn version 1.18.4.0; Bruker, Kontich, Belgium). Incisor roots were excluded from the analysis, and only bone tissue was included for bone analysis. Equivalent thresholds were applied to all images. To determine the ROI, a 4 mm wide circular area was set up in the sagittal plane where the initial 4 mm bone defect area could be seen. For optimal comparison between the samples, an identical number of slices was selected. Four square ROIs, 1.0 mm in width and 1.0 mm in height, were positioned for analysis at the center and the inferior borders of the circular area, as seen in Fig. . The same procedure was performed on the contralateral rat mandible. Within the ROI, bone mineral density (BMD, g/cm³), bone volume (BV, mm³), bone volume/volume of interest (BV/VOI, %), bone surface (BS, mm²), bone surface/volume ratio (BS/BV, 1/mm), trabecular thickness (Tb.Th, mm), trabecular number (Tb.N, 1/mm), and trabecular separation (Tb.Sp, mm) were measured and compared. The datasets were reconstructed into three-dimensional (3D) images using CTvox volume rendering software (CTvox; Bruker, Kontich, Belgium).

The histological and immunohistological analysis of OM healing

The samples from each group were trimmed and decalcified with 0.5 M ethylenediaminetetraacetic acid (pH 8.0) (0.5 M EDTA, pH 8.0; BIOSESANG, Sungnam, Korea) solution for ten days, dehydrated with 70% ethanol, and embedded in paraffin. The 4 μm thick sections were then dewaxed with xylene for approximately 10 min and stained with hematoxylin and eosin (H&E) and Masson's trichrome (MT). The histological slides were then scanned with a digital slide scanner (PANNORAMIC 250 Flash III; 3DHISTECH, Budapest, Hungary) and examined using slide-viewing software (CaseViewer version 2.0; 3DHISTECH, Budapest, Hungary). For quantitative analysis, the numbers of osteocytes and Haversian canals within the regenerated bone tissue of the defect area were counted. The 4 mm circular defect area was determined from the micro-CT 3D images. An area of interest, a fixed rectangle of 350 × 300 μm within the initial defect area, was established with the histological slide-viewing software (CaseViewer version 2.0; 3DHISTECH, Budapest, Hungary) in all of the specimens at a magnification of 20 × (Fig. ). Paraffin-embedded samples were cut into 4 μm thick sections and mounted on glass slides. Examination was performed using a light microscope (OLYMPUS BX41; OLYMPUS, Tokyo, Japan).
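The trabecular indices listed above are produced by CTAn's own 3D algorithms. Purely to illustrate what two of these quantities measure, the sketch below estimates BV/TV and BS/BV from a binarized ROI volume by voxel counting and a marching-cubes surface mesh; the threshold and voxel size are placeholder parameters, and the result will differ in detail from CTAn's output.

```python
import numpy as np
from skimage import measure

def bv_tv_and_bs_bv(roi: np.ndarray, threshold: float, voxel_mm: float):
    """roi: 3-D grayscale ROI; threshold: global bone threshold; voxel_mm: isotropic voxel size.
    Returns (BV/TV in %, BS/BV in 1/mm) estimated by voxel counting and a marching-cubes mesh."""
    bone = roi >= threshold                               # binarize into bone / background
    bv = bone.sum() * voxel_mm ** 3                       # bone volume, mm^3
    tv = bone.size * voxel_mm ** 3                        # total ROI volume, mm^3
    verts, faces, _, _ = measure.marching_cubes(bone.astype(np.uint8), level=0.5,
                                                spacing=(voxel_mm,) * 3)
    bs = measure.mesh_surface_area(verts, faces)          # bone surface area, mm^2
    return 100.0 * bv / tv, bs / bv
```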
For immunohistochemistry (IHC) staining, we used vascular endothelial growth factor A (VEGF-A) (ab46154; Abcam, Cambridge, MA, US) at 1:100 dilution, transforming growth factor β1 (TGF-β1) (sc-130348; Santa Cruz Biotechnology, Dallas, TX, US) at 1:100 dilution, osteopontin (OPN) (sc-73631; Santa Cruz Biotechnology, Dallas, TX, US) at 1:100 dilution, alkaline phosphatase (ALP) (sc-271431; Santa Cruz Biotechnology, Dallas, TX, US) at 1:100 dilution, TNF-α (300-01A; PeproTech, Cranbury, NJ, US) at 1:100 dilution, and IL-6 (sc-28343; Santa Cruz Biotechnology, Dallas, TX, US) at 1:100 dilution antibodies. The staining was scored as follows: "1": none, "2": 1–25%, "3": 26–50%, "4": 51–75%, and "5": 76–100% of cells stained. The intensity of the antibody staining was assessed using a previously described method.

Statistical analysis

Means and standard deviations of the bone healing parameters were obtained. The normality of the data distribution was tested by the Shapiro–Wilk test, and the data showed homogeneity. Differences between groups were tested by ANOVA followed by Tukey–Kramer multiple comparison tests. Statistical analyses were performed using SPSS for Windows, version 25.0 (IBM SPSS Statistics; IBM, Armonk, NY, US). P < 0.05 was considered statistically significant.

Ethical approval

The experimental protocols were approved by the Seoul National University (SNU) Institutional Animal Care and Use Committee (SNU-121123-12-11) and the Institutional Biosafety Committee of SNU (SNUIBC-R121226-1-6). The experiment was in accordance with the "Recommendations for Handling of Laboratory Animals for Biomedical Research" and complied with the Committee on Safety and Ethical Handling Regulations for Laboratory Experiments at SNU. Animal studies were conducted following the ARRIVE guidelines and are in accordance with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.
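As an illustration of the statistical workflow just described (a Shapiro–Wilk normality check, one-way ANOVA across the four groups, then Tukey's multiple comparisons), the sketch below uses scipy and statsmodels. The data-frame layout and column names are assumptions, and statsmodels' pairwise_tukeyhsd stands in for SPSS's Tukey–Kramer procedure (it applies the same adjustment for unequal group sizes).

```python
import pandas as pd
from scipy.stats import shapiro, f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def compare_groups(df: pd.DataFrame, value_col: str = "BMD", group_col: str = "group",
                   alpha: float = 0.05):
    """df: long-format table, one measurement per row, with a group label (e.g. C1, C2, E1, E2)."""
    groups = [g[value_col].values for _, g in df.groupby(group_col)]
    normality = all(shapiro(g)[1] > alpha for g in groups if len(g) >= 3)  # Shapiro-Wilk per group
    f_stat, p_anova = f_oneway(*groups)                                    # one-way ANOVA
    tukey = pairwise_tukeyhsd(df[value_col], df[group_col], alpha=alpha)   # Tukey multiple comparisons
    return {"normal": normality, "anova_p": p_anova, "tukey": tukey.summary()}
```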
Means and standard deviations for the bone healing parameters were calculated. Normality of the data was tested with the Shapiro–Wilk test, and the data were normally distributed. Differences between groups were tested by ANOVA followed by Tukey–Kramer multiple comparison tests. Statistical analyses were performed using SPSS for Windows, version 25.0 (IBM SPSS Statistics; IBM, Armonk, NY, US). P values < 0.05 were considered statistically significant.
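The analyses were run in SPSS; as a hedged sketch of the equivalent workflow in Python, the snippet below applies the same Shapiro–Wilk normality check, one-way ANOVA and Tukey(-Kramer) post-hoc comparisons. The group labels mirror the study design, but the numeric values are placeholders, not study data.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder measurements (e.g. BV in mm^3) for the four groups -- not real data.
groups = {"C1": np.array([0.31, 0.28, 0.35, 0.30]),
          "C2": np.array([0.42, 0.39, 0.45, 0.41]),
          "E1": np.array([0.60, 0.57, 0.63, 0.58]),
          "E2": np.array([0.73, 0.70, 0.76, 0.72])}

# 1) Shapiro-Wilk normality check per group
for name, values in groups.items():
    w, p = stats.shapiro(values)
    print(f"{name}: Shapiro-Wilk p = {p:.3f}")

# 2) One-way ANOVA across the four groups
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA p = {p_anova:.4f}")

# 3) Tukey HSD post-hoc pairwise comparisons (Tukey-Kramer when group sizes differ)
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```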
The experimental protocols were approved by the Seoul National University (SNU) Institutional Animal Care and Use Committee (SNU-121123-12-11) and Institutional Biosafety Committee of SNU (SNUIBC-R121226-1-6). The experiment was in accordance with the “Recommendations for handling of Laboratory Animals for Biomedical Research” and complied with the Committee on Safety and ethical Handling Regulations for Laboratory Experiments at SNU. Animal studies were conducted following the ARRIVE guidelines and are in accordance with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.
Establishment of an S. aureus-infected jaw osteomyelitis rat model
A pathogen dose of 20 μl of 10⁷ CFU/ml was effective in creating jaw OM in the rat, and a repeatable animal model was established. After two weeks of infection, all groups showed visible clinical manifestations of infectious jaw OM, including skin redness, swelling, purulent discharge and alopecia. Six animals died during the experimental trial due to OM complications: one animal from the C1 group, three from the C2 group, one from E1 and one from E2. The death of three animals from the C2 group, which received conventional surgical curettage for jaw OM without decompression, could be explained by the development of bacterial jaw OM together with the surgical intervention, which can place a serious adverse burden on the body. Consequently, after six weeks, specimens from the remaining animals (n = 8) were collected and further analyzed. All of the animals were in good health before the infection, and the clinical findings showed the common characteristics of jaw osteomyelitis. The establishment of the S. aureus-infected jaw osteomyelitis rat model was confirmed by the following parameters: clinical findings, blood tests, micro-CT bone architecture findings, and histological analysis.
Clinical evaluation with blood tests
After infection with S. aureus, all animals from the control and experimental groups showed weight loss. Significant weight loss was observed in all groups, with an average of −31.57 (± 21.98) g at one week after infection, which recovered after two weeks (p < 0.001). There was no statistically significant difference in weight loss between the groups at one week after infection (Table ). The neutrophil count was significantly increased after infection in all groups, with no statistically significant differences between the groups at one week and four weeks after treatment (Table ). The WBC count was significantly increased in all groups after infection and recovered to the normal range at four weeks after treatment (p = 0.04); no significant difference was observed between the groups at one week or four weeks after treatment (Table ). Serum levels of alkaline phosphatase (ALP) were also measured and analyzed. At one week after infection, ALP levels were significantly increased in all groups (p = 0.001), and a significant reduction was observed at one week and two weeks after treatment (p = 0.004 and p < 0.001, respectively). No significant differences were found between groups (Table ).
Micro-CT results of bone healing
From the 3D images, more bone healing was observed in the E1 and E2 groups, where the initial bone defect was replaced by new bone tissue. The E2 group had the most compact bone formation compared with the other groups. The C1 group, which received no treatment, showed bone destruction that continuously spread from the initial defect to affect a wider area. The common characteristics of osteomyelitis of the jaw, such as bone necrosis, sequestrum formation (a segment of necrotic bone separated from the viable bone by granulation tissue) and bone resorption, were observed in the control groups. The BMD results were significantly different between the groups. The BMD in the C1 group was significantly lower than that of the E2 group, with a mean difference of −0.45 g/cm³ (p = 0.004) (Table ) (Fig. a). The BV was highest in the E1 and E2 groups. The E2 group showed significantly higher values than the C1 group (p = 0.005), with a mean value of 0.73 (± 0.08) mm³ (Fig. b). The BV/VOI parameter was significantly higher in the E2 group, with an average value of 75.70 (± 14.32) %, compared with the C1 group (p = 0.003) (Fig. c). The BS, BS/VOI, Tb.N and Tb.Sp parameters were significantly different between the control and experimental groups (Table ); BS, BS/VOI and Tb.N showed significant differences (p = 0.002, p = 0.001 and p = 0.008, respectively) (Fig. a–c), whereas the Tb.Sp in the C1 group was significantly higher than in the experimental groups (Fig. d). The Tb.Th was not significantly different between the groups.
Histological and immunohistochemical results of OM healing
The morphological changes in bone healing were observed macroscopically in the H&E and MT stained slides of the defect area at 4 weeks. The C1 group showed high-grade inflammatory infiltration consisting of neutrophils, eosinophils and macrophages around the bacterial colonies. Signs of infective osteomyelitis, including bone necrosis, bone resorption and destruction, with no bone healing, were observed. In MT staining, the C1 group was stained a deep blue color, indicating old bone, while no new bone formation (stained bright blue) was observed (Fig. A1–A6). Histological findings for the C2 group showed bone healing with an osteoblastic cell lining in the parenchymal tissue at the center of the defect area, with evidence of inflammatory infiltrates (Fig. B1–B6). The histological features of the E1 group included scattered lymphocytic inflammatory infiltrates and loose marrow fibrosis. In the MT stain, new bone formation stained bright blue and new blood vessels were observed (Fig. C1–C6). The E2 group showed active bone remodeling with the thickest and most compact new bone formation in the defect area compared with the other groups. Increased osteophytic bone formation was observed. Furthermore, an increased number of Haversian canals with osteoblast rimming and new blood vessel formation, stained deep red by the MT stain, were seen (Fig. D1–D6). We counted the numbers of osteocytes and Haversian canals in the ROI for quantitative analysis. The results showed that the E2 group had a statistically significantly greater osteocyte count than the control groups (p = 0.002) (Table ). To confirm the inflammatory, angiogenic and osteogenic properties of the control and experimental groups, IHC staining was performed. The expression of the inflammation-related marker IL-6 in the E2 group was weak (score “2”: 1–25% of cells positive) compared with that of the C1 group (“5”: 76–100% of cells stained). TGF-β1 expression was markedly high in the E1 group, while the C1 group showed no expression (“1”: none). The TNF-α antibody stained strongly in the C1 and C2 groups compared with the other groups (“5”: 76–100% of cells stained). The expression of VEGF-A was highest in C1 (“5”: 76–100%) compared with E2 (“2”: 1–25%). The osteogenesis markers ALP and OPN were also strongly expressed in the E1 group compared with the other groups (Fig. ).
The results of the current study showed that decompression had significantly greater bone healing effects compared with conventional surgical treatment alone in infectious jaw OM, based on the clinical, micro-CT, histological and immunohistochemical analyses. Thus, the null hypothesis was rejected. Decompression promotes wound healing through the upregulation of innate immunity-related, osteogenic and angiogenic proteins in the postoperative jaw OM wound area. To the best of our knowledge, the current study analyzes for the first time the effects of decompression using a drain in an in vivo jaw OM model. Decompression using drains is a well-established and reliable conservative treatment method for cystic lesions in the jaw. Although this method is often used for treating cystic lesions and its therapeutic effects have been reported extensively in the literature, there are no studies of decompression treatment used for jaw OM. After surgical treatments for jaw OM, such as saucerization or decortication, excess fluid build-up can exert high pressure within the bone marrow covered by the cortical bone. Surgeons usually place surgical drains to remove any excess postoperative exudate and prevent edema formation. However, the same bone healing and regenerative effects of decompression can be applied to the treatment of jaw OM. Therefore, this study discusses the therapeutic effects of decompression in jaw OM, rather than decompression of cystic lesions. The effectiveness of decompression in the treatment of jaw OM can be explained by fluid removal and alteration of the wound environment to one conducive to healing. Excess fluid build-up with high pressure can be regarded as one of the major factors that compromise healing, partly owing to the compressive pressure that it exerts on local cells and surrounding tissue. If the fluid pressure is elevated in the bone marrow, the proliferative response diminishes due to dampened intrinsic tension build-up. Applying decompression and drainage to this area permits fluid removal from the extracellular space. The removal of postoperative fluid decompresses the microvasculature and permits tissue perfusion by reducing pressure and enhancing blood circulation to the area. It also removes toxins, inflammatory exudate and pathogenic bacteria from the operative site, which is considered an important element of the wound healing process. The significance of this study is that it demonstrates the effectiveness of decompression using a drain in jaw OM, which had significant bone healing effects according to the micro-CT, histology and IHC analyses. For developing new therapeutic methods for OM of the jaw, it is important to establish standard operative protocols for animal modeling. To our knowledge, there are no scientific data in the literature on the effects of decompression using a drain in a jaw OM animal model, and the current S. aureus-infected jaw osteomyelitis rat model has not been previously described. We report our methodology according to the guidelines for assessment of bone microstructure in rodents using micro-CT, and report results using morphometric indices that can determine new bone formation. The most informative parameters reflecting the course of bone healing are BV, BV/VOI, BS/VOI, and BMD.
More pronounced increases in BV, BV/VOI, BS/VOI and BMD were achieved in the experimental E1 and E2 groups compared with the control groups. The 3D evaluation showed that bone healing was more rapid in the E1 and E2 groups than in the C1 and C2 groups (Fig. ). This can be explained by the quantity and quality of newly formed bone in the defect area of the jaw OM rat model. These results were supported by the histology analysis, in which the control groups had evident inflammatory infiltration with bacterial colonies, making the environment unfavorable for bone healing and regeneration. The IHC results were also in accordance with the micro-CT findings, showing strong staining for the osteogenesis-related proteins ALP and OPN. This favorable outcome suggests that decompression in combination with surgical treatment can remove the postoperative exudate build-up, clearing inflammatory toxins from the wound and decreasing the pressure in the high-pressure area, thereby supporting bone healing and regeneration. In terms of bone healing and anti-inflammation, the experimental groups showed more effective results, including rapid bone healing, blood vessel formation and reduced inflammation. The E2 group also had the highest osteocyte count. An increase in osteocytes plays a pivotal role in regulating bone turnover and enhances osteogenesis of stem cells, suggesting an important role in tissue regeneration. The histological results suggested that the most optimal bony healing was seen in the experimental groups, in accordance with the micro-CT analysis. In our study, we demonstrated that VEGF-A was activated and then reduced in the E2 group, leading to enhanced and accelerated bone healing compared with the E1 group, in which bone healing was evident but much slower. Angiogenesis and osteogenesis are two intimately connected processes that must be closely coupled to permit physiological bone function. In fact, alterations in vascular growth can alter the physiological bone healing process, which may lead to osteoporosis, osteonecrosis and non-union fractures. According to a previous clinical study, decompression had a direct influence on the microvascular circulation and enhanced VEGF protein on the first day following surgical treatment, thereby activating osteogenesis-related proteins such as OPN and ALP, which decreased by the second day. In the bone remodeling phase, TNF-α and other pro-inflammatory cytokines are thought to play an important role in bone healing. IL-6 is a well-known cytokine that stimulates osteoclast differentiation and bone resorption indirectly, depending on the context of release. TNF-α is a widely known key player in the pathogenesis of osteomyelitis. In IHC staining, the pro-inflammatory markers IL-6 and TNF-α were strongly stained in the C1 group, indicating an inflammatory reaction to the bacteria, while the groups that received surgical treatment were weakly stained. From the micro-CT and histology analyses, the C1 and C2 groups exhibited the weakest bone healing with strong inflammatory infiltration, which could be explained by the high expression of IL-6 found in C1 and the high expression of TNF-α in the C1 and C2 groups. The high expression of IL-6 and TNF-α in these groups had active bone resorptive effects, which suppressed osteogenesis. The high expression of TGF-β1 found in the E1 group indicates an active angiogenic function of decompression as well as reduced activation of osteoclasts and bone resorption.
Compared with the E1 group, the E2 group showed well-formed cortical bone with less marrow bone. This could also explain the weak staining of angiogenesis- and osteogenesis-related antibodies, since adequate and compact bone healing had already been established compared with the other groups. TGF-β is an important cytokine that balances bone formation and bone resorption, mineral storage, hematopoietic cell generation and osteoimmunology. In particular, high concentrations of TGF-β1 reduce the activation of osteoclasts, while low concentrations promote osteoclast maturation. TGF-β1 was highly expressed in the E1 group. Previous studies suggest that TGF-β1 prevents TNF-α-induced bone destruction by suppressing effector T cell function. The results were in accordance with our hypothesis that decompression using a drain has significant therapeutic effects on bone regeneration in jaw OM; the null hypothesis was therefore rejected. When decompression was applied in addition to the curettage treatment, enhanced wound and bone healing were achieved. The decompression effects on the healing process were further enhanced with weekly normal saline irrigation. Although many surgeons use the decompression technique for managing cystic lesions, its use in managing jaw OM has not been reported in the literature. The current study results strongly support the clinical relevance of decompression in combination with conventional surgical treatment for jaw OM. In our ongoing clinical study, this treatment method has shown many clinical merits, such as reduced swelling and discomfort, ease of use, convenience and low cost. The results of this clinical study revealed that the group of patients treated with saucerization and drain insertion exhibited greater bone density than the groups without drainage at the six-month and one-year follow-ups. In addition, drain insertion for decompression was effective in both sclerosing and suppurative types of jaw OM. In the future, the current model can be enhanced by the additional design of an intervention group with antibiotic treatment, to study the combination of surgical decompressive and medical treatment. In this way, the most effective treatment protocol for each type of OM can be determined. At present, the application of decompression could be a reliable choice for surgeons and clinicians as a treatment in combination with surgical treatment to allow accelerated bone healing.
Frequency of Expression of PD-1 and PD-L1 in Head and Neck Squamous Cell Carcinoma and Their Association with Nodal Metastasis: A Cross-Sectional Study
The goal of this study is to provide further insight into and assess the expression of PD-L1 in the primary tumor cells and PD-1 in TILs, and their relationship to lymph node metastasis and other clinicopathological factors. This will help the clinician and oncologist in many ways, such as better delineation of treatment regimens and prognostication of the disease. It will also help in the correct assessment of the disease course at the initial diagnosis on incisional biopsies. Recently, the FDA has approved anti-PD-1 and anti-PD-L1 mAbs, which target a varied range of human cancers (Chen and Han, 2015).
Ethical Approval
IRB approval was given by the Ethical Review Committee, AFIP, via letter MP-ORP18-6/READ-IRB/19/645.
Sample Collection
This cross-sectional study was carried out on a total of 66 cases of head and neck squamous cell carcinoma collected from the Department of Histopathology, AFIP, Rawalpindi, Pakistan over a period of one year from June 2019 to June 2020. Non-probability convenience sampling was used to achieve the required sample size, including patients of all ages and both genders. Fresh cases of HNSCC, irrespective of grade and differentiation, diagnosed on H&E and submitted to the Histopathology department were included in the study, whereas specimens with poor fixation or scarce and scanty tissue, post-chemoradiotherapy tissue samples and necrotic tumors were excluded. For the selected cases, clinical and demographic details of the patients were recorded from the histories provided with the cases. To avoid confounding factors, the inclusion and exclusion criteria were followed. The H&E slides were used to obtain the definitive diagnosis, which was followed by IHC staining.
Immunohistochemical staining
The IHC markers were applied using an indirect IHC technique. Monoclonal rabbit antibody to PD-1 (Clone EP239, Catalogue no: PR196-6ml RTU; PathnSitu) and monoclonal rabbit antibody to PD-L1 (Clone RBT-PD-L1, Catalogue no: BSB2649; BioSB) were used following the standard protocol of application. The microscopic results were evaluated and verified by the consultant histopathologist.
Evaluation of IHC
PD-1 and PD-L1 positive cells were defined by brown staining of the cell membrane/cytoplasm and, depending on this staining intensity, the cases were recorded as positive or negative. The control for PD-1 was lymph node and tonsil, with the recorded staining pattern as cytoplasmic and/or membranous. The control for PD-L1 was lymph node, with the recorded staining pattern as cytoplasmic and membranous.
Statistical Analysis
Data analysis was performed by compiling the data from the collection proforma and analyzing it in SPSS (version 24.0). Frequencies and percentages were calculated for the qualitative variables (gender, site, laterality, differentiation, IHC expression of PD-1 and PD-L1), and means and standard deviations for the quantitative variables (age, pT, pN). The chi-square test was used to compare the results and assess the significance of differences. A p-value < 0.05 was set as significant.
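The association tests were run in SPSS; as an illustrative sketch of the equivalent chi-square test of independence in Python, the snippet below applies scipy's chi2_contingency to a 2x2 cross-tabulation of marker status versus nodal metastasis. The counts shown are placeholders, not the study's actual data.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 cross-tabulation (placeholder counts):
#                      metastasis +   metastasis -
table = [[30, 10],   # marker positive
         [12, 14]]   # marker negative

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
```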
A total of 66 cases of oral squamous cell carcinoma with corresponding neck dissections were collected. In 72.7% (48) of the specimens, associated lymph node metastasis was seen. Of the 66 patients, 44 (66.7%) were males and 22 (33.3%) were females, corresponding to a male:female ratio of 2:1. The mean age was 59.5 ± 13.637 years (range 34 to 91 years). According to the histological grading of head and neck squamous cell carcinoma, there were 22 (33.3%) well differentiated, 28 (42.4%) moderately differentiated and 15 (22.7%) poorly differentiated SCC cases, whereas the grade of one case could not be assessed. The maximum number of HNSCC cases was seen in the mandible (37.9%), followed by the maxilla (25.8%), buccal mucosa (13.6%), tongue (10.6%), palate (4.5%), floor of the mouth (4.5%) and salivary glands (3%). The expression of PD-1 in TILs and PD-L1 in tumor cells was evaluated in the primary tumor sections of the 66 specimens, and their expression was also assessed in the corresponding metastatic lymph nodes of 48 specimens.
PD-1 and PD-L1 expression in tumor proper sections
Of the 66 cases, 47 (71.2%) showed positive PD-1 expression in TILs in the primary tumor sections, whereas 40 (60.6%) showed positive PD-L1 expression in the tumor cells of the primary tumor.
PD-1 and PD-L1 expression in corresponding metastatic lymph nodes
Of the 66 cases, 48 (72.7%) showed lymph node metastasis, of which 45 showed positive expression for PD-1 and 25 showed positive expression for PD-L1 in the metastatic lymph nodes.
PD-1 expression in TILs
The chi-square test showed a significant association between PD-1 expression in TILs of the primary tumor and lymph node metastasis (p = 0.003) (represented in ), gender of the patient (p = 0.002) and the pathological stage of the regional lymph nodes (p = 0.0036). Another noteworthy correlation was seen between PD-1 expression in TILs in tumor proper sections and PD-1 expression in the associated metastatic lymph nodes (p = 0.003). However, no significant associations were found with the pathological staging of the primary tumor, age, grade of the tumor, or site and laterality of the tumor.
PD-L1 expression in tumor cells
To assess the association between the IHC expression of PD-L1 and associated lymph node metastasis, the chi-square test was applied and showed a significant result (p = 0.04), given in the . Additionally, a significant association was seen between PD-L1 expression in the primary tumor and the pathological staging of the regional lymph nodes (p = 0.04). However, no significant associations were found with the pathological staging of the primary tumor, age, gender, grade of the tumor, site and laterality of the tumor, or PD-L1 expression in the metastatic lymph nodes. It was also seen that PD-L1 expression was lower in high-grade invasive OSCC than in low-grade invasive OSCC.
Approximately 300,000 new cases of oral cancer are reported annually, with OSCC as the commonest oral cancer, resulting in high morbidity and mortality rates and an approximately 60% 5-year survival rate (Lenouvel et al., 2020). OSCC accounts for over 90% of all oral malignancies (Goel et al., 2020). Oral squamous cell carcinoma is reported to be the 6th commonest malignancy of the head and neck (Wang et al., 2015). In spite of treatment advances, considerable morbidity and mortality are seen in these malignancies (Hanna, 2020; Wu et al., 2016). This study focuses on the latest treatment option for malignancies, i.e., immunotherapy. It targets the frequency of PD-1 and PD-L1 expression in HNSCC and associated lymph node metastasis, which in turn will inform prognostication and immunotherapy. Very few studies have analyzed the expression and association of PD-1 and PD-L1 in lymph node metastasis and primary SCC. Our results would in turn help in therapeutic decisions and the prognosis of HNSCC. Immunotherapy aims at the normalization and enhancement of the body's anti-tumor immune response by killing and inhibiting cancer cells through immune system activation. PD-1 and PD-L1 immunotherapy shows increased efficacy and increases the chance of long-term survival. Many types of cancer are intractable to conventional chemotherapy. Checkpoint immunomodulation maintains the imbalance between immune surveillance and proliferation of cancer cells, which results in tumor survival (Alsaab et al., 2017). PD-1 and PD-L1 inhibitors (checkpoint blockers) act as suppressing factors for the tumor through immunomodulation and are therefore becoming a promising therapeutic method with minimal side effects. Recently, four inhibitors, including PD-1, PD-L1 and CTLA-4 inhibitors, have been approved by the FDA. In spite of the successful results of anti-PD-1 therapy, it is still used for only a limited number of cancer types (Pardoll, 2012). The present study evaluated the immunohistochemical expression of PD-1 and PD-L1 in primary HNSCC and associated metastatic lymph nodes in patients already treated with conventional surgery. A total of 66 cases of HNSCC with associated lymph nodes were selected at the Histopathology Department of AFIP, Rawalpindi. Of the 66 cases, 48 (72.7%) showed metastasis in corresponding lymph nodes, whereas in a study conducted in Austria in 2018, 56.7% of cases showed lymph node metastasis (Schneider et al., 2018). The mean age of the patients was 59.5 ± 13.63 years (mean ± SD), with the youngest patient aged 34 years and the oldest 91 years, whereas a higher incidence of HNSCC was seen in age groups of 40 years and above in studies conducted in Norway, Sweden, Denmark and Finland (Akram, Mirza, Mirza, & Qureshi, 2013). However, some previous studies conducted in Pakistan and India show similar results. Similarly, the age distribution in our study showed that 37 out of 64 patients were below 60 years of age and 27 were above 60 years. Among the patients, 44 (66.7%) were males and 22 (33.3%) were females, with a male:female ratio of 2:1, which is consistent with previous local and international studies.
This was, however, inconsistent with a meta-analysis carried out in 2019 in Spain, which included 56% female and 44% male patients and reported higher PD-L1 expression in female patients (Lenouvel et al., 2020). In the current study, the most common site for HNSCC was the mandible (37.9%), followed by the maxilla (25.8%), buccal mucosa (13.6%), tongue (10.6%), palate (4.5%), floor of the mouth (4.5%) and salivary glands (3%). However, in Asia the buccal mucosa is known to be the most commonly affected area, followed by the tongue (Sharma, Saxena, & Aggarwal, 2010), which might be because of the oral habits of the local population. According to the histological grading of HNSCC, there were 22 (33.3%) well differentiated, 28 (42.4%) moderately differentiated and 15 (22.7%) poorly differentiated SCC cases, and the grade of one case could not be assessed. This was in contrast to a study carried out in 2020 in which 87% of cases were poorly differentiated squamous cell carcinoma (Lenouvel et al., 2020). In this study, PD-1 and PD-L1 IHC expression was assessed in primary tumor sections and the corresponding metastatic lymph nodes. Of the 66 cases, 47 (71.2%) showed positive expression of PD-1 in the tumor infiltrating lymphocytes seen in the primary tumor sections, whereas 40 (60.6%) cases showed PD-L1 positivity in the cancer cells of the primary tumor. These results are in line with international studies, in which PD-L1 positivity ranged from 7% to 87% of cases (Lenouvel et al., 2020). In one study, PD-1 and PD-L1 targeting immunotherapy for metastatic and recurrent HNSCC showed improvement in patient survival (Ferris et al., 2018); furthermore, a better overall survival rate is seen in tumors expressing high PD-L1 levels (Cohen et al., 2019). PD-L1 expression was more commonly seen in tumors with an increased number of lymphocytes; however, the p-value was not significant. A study conducted in 2019 reported a relationship between PD-L1 overexpression and an increased number of TILs (Khan et al., 2019). This suggests that malignant cells use the immune system to their own advantage (Lenouvel et al., 2020). In our study, another significant association was seen between PD-L1 expression in the primary tumor and the pathological staging of the regional lymph nodes (p = 0.04), which differed from a study carried out in 2020 showing no significant relationship between the N stage of the tumor and PD-L1 expression (Gennen et al., 2020). Surprisingly, lower PD-L1 expression was seen in high-grade OSCC than in low-grade OSCC, and these data contradicted a previous study carried out in 2017 (Hirai et al., 2017). Further investigation is therefore suggested to test this hypothesis. In this study, PD-1 positivity was seen in 39 out of 46 (84.8%) cases with lymph node metastasis. This represented a noteworthy relationship between PD-1 expression in TILs of the primary tumor and lymph node metastasis (p = 0.003). Similarly, in a study conducted in Spain, PD-1 expression in TILs correlated significantly between the corresponding primary tumor and lymph node metastases (p = 0.0412), and PD-1 expressing lymphocytes were found in 90% (69/77) of lymph node samples (Schneider et al., 2018).
In our study, another significant association was seen between the expression of PD-1 in TILs in tumor proper sections and the expression of PD-1 in the associated metastatic lymph nodes (38/46 cases positive, p = 0.003). Of the 66 cases, 48 were associated with lymph node metastasis, and a significant association (p = 0.027) was seen between PD-L1 expression and lymph node metastasis, as 67.4% (33/48) showed positive PD-L1 expression in the primary tumor. A study conducted in 2019 concluded that high immunohistochemical expression of PD-L1 might be a significant factor in predicting anti-PD-1/PD-L1 monotherapy efficacy (Xu et al., 2019). Although conventional chemotherapy and radiotherapy have long been used for the treatment of cancer, immunotherapy is now considered to be among the most effective approaches against cancer (Longo et al., 2019). PD-1 and PD-L1 are considered the most integral suppressors of the cytotoxic immune response, and antibodies against them have been approved by the FDA (Force et al., 2019). Studies conducted in 2019 concluded that high PD-1 expression was linked with improved survival of breast carcinoma patients (Jiang et al., 2019) and that patients showing high PD-L1 expression are expected to benefit more from anti-PD-1/PD-L1 therapy (Xu et al., 2019). The main limitation of this study was the sample size, which was relatively small for this kind of epidemiological study. A larger sample size would further validate the results, and we therefore recommend conducting the study on a larger sample. This study has paved the path for further research in the field of immunotherapy in HNSCC; however, its conclusions support the need for more studies to further establish the use of immune checkpoint inhibitors in the therapy of HNSCC. In conclusion, the vital role of checkpoint (PD-1 and PD-L1) inhibitors in most cancers warrants further research, as they represent both an opportunity and a challenge for cancer treatment. A variety of solid cancers show increased IHC expression of PD-1 and PD-L1. Immunotherapy, being a promising and advantageous treatment, seems to be an important step in cancer therapy. PD-1 expressing TILs and PD-L1 in the tumor cells of the primary tumor, along with the corresponding lymph nodes, showed an association with the cancer profiles. With regard to PD-L1 and PD-1 expression, this study provides helpful data for new treatment regimens, as our results further emphasize the expression of PD-1 and PD-L1 in HNSCC and their significant association with nodal metastasis. Appropriate and accurate assessment, diagnosis and treatment with the help of these IHC markers would lead to early and timely diagnosis followed by a positive treatment outcome. The expression of PD-L1 in tumor cells and PD-1 in tumor infiltrating lymphocytes may help the clinician and oncologist in many ways, such as better delineation of treatment regimens and prognostication of the disease, and will also help in the appropriate assessment of the course of disease at the time of initial diagnosis on incisional biopsies.
Iqraa Shakeel Malik: Interpretation of IHC staining and manuscript writing. Muhammad Asif: Study conception and design along with histopathological diagnosis and interpretation. Namrah Bashir: Data collection. Nighat Ara: Manuscript writing. Farhat Rashid: Review of IHC results. Hafeez Ud Din: Data analysis. Numrah Shakeel Malik: Manuscript writing. Aimen Bashir: Data collection.
Dual-specificity MAP kinase phosphatases in health and disease | 72b36061-5852-4f06-8db0-b20a000d3bf6 | 6227380 | Pathology[mh] | Introduction Mammalian dual-specificity MAP kinase (MAPK) phosphatases (MKPs) comprise a subfamily of 10 catalytically active enzymes with a conserved domain structure. This consists of an amino-terminal non-catalytic domain and a carboxyl-terminal catalytic domain. The former contains the kinase interaction motif (KIM), which determines the specific binding and thus substrate selectivity of the MKP for the different MAP kinase isoforms and can also contain nuclear localisation (NLS) or export (NES) signals, which determine the subcellular localisation of certain MKPs. The catalytic domain carries the highly conserved active site consensus sequence (HCX 5 R) that is characteristic of the larger protein tyrosine phosphatase (PTPase) superfamily. The regulation, structure, catalytic mechanism and substrate selectivity of the MKPs have been extensively reviewed . Briefly, the 10 enzymes can be divided into three subgroups based on amino acid sequence homology, subcellular localisation and substrate specificity. These are the inducible nuclear MKPs, comprising DUSP1 /MKP-1, DUSP2 , DUSP4 /MKP-2 and DUSP5 , the cytoplasmic, extracellular-signal regulated kinase (ERK) -specific MKPs DUSP6 /MKP-3, DUSP7 /MKP-X and DUSP9 /MKP-4 and a group of three MKPs DUSP8 , DUSP10 /MKP-5 and DUSP16 /MKP-7 that are found in both the cytoplasm and cell nucleus and are relatively selective in their ability to dephosphorylate the p38 and c-Jun amino terminal kinases (JNKs), having little or no activity towards the classical extracellular signal-regulated kinase (ERK) MAPKs . Key features and characteristics of each of the 10 MKPs are also summarised . Our understanding of the physiological and pathophysiological roles for the MKPs has largely been driven by the generation of genetically engineered mouse (GEM) models in which individual MKPs have been deleted, either unconditionally, or in a tissue specific manner. This work, combined with studies in other model organisms, cell lines and observations in human cells and tissues has gradually revealed that MKPs play fundamental roles in the regulation of signalling events associated with normal development and homeostasis, but can also modulate a wide range of pathophysiological signalling outcomes with relevance to human disease. In this review we will detail the current level of understanding for each of the MKPs in turn, highlighting recent advances and future perspectives in the field.
The inducible nuclear MKPs 2.1 DUSP1 /MKP-1 DUSP1 /MKP-1 was the first of the dual-specificity MKPs to be characterised and was initially discovered as a growth factor or stress-inducible gene encoding a nuclear protein with homology to VH1, the prototypic dual-specificity protein phosphatase encoded by vaccinia virus . Initially characterised as a phosphatase able to specifically dephosphorylate the threonine and tyrosine residues of the signature T- E -Y motif within the activation loop of the classical MAPK ERK2 in vitro and in vivo it was later realised that DUSP1 /MKP-1 was capable of dephosphorylating all three major classes of MAPK with a distinct preference for the JNK isoforms followed by p38α and ERK1/2 MAPKs . DUSP1 /MKP-1 was also the first gene encoding an MKP to be deleted in the mouse, where no phenotype was initially reported with respect to development, fertility or lifespan and no evidence for deregulated ERK signalling was found in DUSP1 −/− mouse embryo fibroblasts (MEFs) . The failure to detect changes in MAPK activity in cells lacking DUSP1 /MKP-1 was probably due to an initial focus on the ERK1/2 pathway. Subsequent work in MEFs clearly showed that loss of DUSP1 /MKP-1 caused a significant increase in the activities of the stress-induced JNK and p38 MAPKs and revealed that MEFs lacking MKP-1 are acutely sensitive to JNK-mediated apoptosis in response to a wide variety of cellular stresses including UV-radiation, ionising radiation, hydrogen peroxide, anisomycin and cisplatin . Further experiments conducted using DUSP1 −/− mice quickly led to the realisation that this phosphatase regulates a number of physiological and pathophysiological processes including immunity, metabolic homeostasis, cellular responses to anticancer drugs, muscle regeneration, and neuronal function. 2.1.1 DUSP1 /MKP-1 in innate and adaptive immunity Given the wide range of roles that MAPKs perform in the development and function of cells of the immune system it was perhaps no surprise that amongst the first phenotypes detected in DUSP1 −/− mice was a failure to regulate stress-activated JNK and p38 signalling in macrophages and dendritic cells . These cells are key mediators of the innate immune response in which the p38 and JNK MAPKs lie downstream of the toll-like receptors (TLRs), which are activated by a wide variety of pathogen-derived stimuli and act to regulate the expression of both pro and anti-inflammatory cytokines and chemokines . Several groups demonstrated that loss of DUSP1 /MKP-1 led to elevated JNK and p38 activities in macrophages exposed to the bacterial endotoxin lipopolysaccharide (LPS) . This led to an initial increase in the expression of pro-inflammatory cytokines such as tumour necrosis factor alpha (TNFα), interleukin-6 (IL-6), interleukin-12 (IL-12) and interferon-gamma (IFN-γ) while, at later times, levels of the anti-inflammatory mediator interleukin-10 (IL-10) were increased . These cellular effects were accompanied by pathological changes such as inflammatory tissue infiltration, hypotension and multiple organ failure, all of which are markers of the severe septic shock and increased mortality observed in LPS-injected DUSP1 −/− mice when compared to wild type controls. With respect to the above changes in cytokine expression, the regulation of gene transcription by MAPK-regulated transcription factors such as activator protein-1 (AP1), activating transcription factor 1 (ATF-1) and cAMP response element binding protein (CREB) by DUSP1 /MKP-1 was an early focus . 
However, a major mechanism by which cytokine expression is controlled is via changes in mRNA stability and recent studies have revealed that DUSP1 /MKP-1 modulates cytokine mRNA levels by suppressing the p38 driven MAPK-activated protein kinase 2 (MK2)-dependent phosphorylation of the mRNA destabilising protein tristetraprolin (TTP) . TTP, which recognizes adenosine/uridine-rich elements (AREs) in the 3′ untranslated regions (UTRs) of cytokine mRNAs and recruits components of the cellular mRNA degradation machinery is phosphorylated by MK-2 on two sites (Ser52 and 178), which leads to both inactivation and stabilisation of TTP . Thus, loss of DUSP1 /MKP-1, by promoting p38-MK2-driven phosphorylation of TTP, favours TTP inactivation and cytokine mRNA stabilisation. In an elegant series of experiments, Smallie et al., combined deletion of DUSP1 /MKP-1 with a homozygous knock-in mutant in which the MK2-dependent phosphorylation sites within TTP are ablated and demonstrated that in bone marrow-derived macrophages (BMDMs) derived from the double mutant mice the elevated cytokine mRNA and protein levels seen on deletion of DUSP1 /MKP-1 alone was largely prevented. A similar reversal in the elevated serum levels of cytokines seen in LPS-injected DUSP1 −/− mice was also observed in the double mutant animals and microarray experiments performed using LPS-treated BMDMs, indicate that DUSP1 /MKP-1 regulates more than half of the genome-wide response to LPS, either wholly or partly via the phosphorylation of TTP . A similar approach revealed that production of interferon beta (IFNβ) in response to TLR activation is also mediated in part by DUSP1 /MKP-1-mediated regulation of TTP, but that in the early phase of the response DUSP1 /MKP-1 regulates IFNβ transcription via JNK-mediated phosphorylation of c-jun, which binds to the IFNβ promoter . Taken together, this work demonstrates that TLR mediated expression of DUSP1 /MKP-1 is a key component of a pathway, which acts through regulation of MAPK-dependent transcription factors and TTP to negatively regulate pathological inflammatory responses, to engage the “off phase” of macrophage-mediated responses to pro-inflammatory stimuli and promote the resolution of inflammation. As such, any defects in this pathway would be expected to impede the latter process and contribute to a range of chronic inflammatory diseases, making the DUSP1 /MKP-1-p38-MK2 signalling axis a prime candidate for therapeutic intervention. While innate immunity comprises an acute, non-specific response to foreign antigens, adaptive or acquired immunity is highly specific to a particular antigenic stimulus and comprises a network of specialized, immune cells and processes that either eliminate pathogens or prevent their growth. In addition, by generating immunological memory, this response also provides long-lasting immunity against infection, which is the basis of vaccination, while an abnormal or maladaptive response can result in autoimmune disease. The workhorses of the system are the B and T lymphocytes, which mediate humoral (antibody-mediated) immunity and cell-mediated (cytotoxic or effector cell-mediated) responses. Despite the key role for ERK signalling in thymocytes and the observation that DUSP1 /MKP-1 is expressed at varying levels during T-cell development, mice lacking DUSP1 /MKP-1 do not present with abnormalities in this process and the ratio of CD4 + to CD8 + T-cells following thymic maturation is in the normal range . 
This possibly reflects either redundancy amongst ERK-specific phosphatases or the fact that ERK is not the preferred target for DUSP1 /MKP-1. However, in mature CD4+ T cells, loss of DUSP1 /MKP-1 seems to impact T cell function with decreased activation and proliferation following exposure to phorbol 12-myristate 13-acetate (PMA) and ionomycin and increased levels of JNK signalling . Furthermore, both CD4+ and CD8+ T cells lacking DUSP1 /MKP-1 showed reduced proliferation and interleukin-2 (IL-2) production after exposure to anti-CD3 antibody to mimic T cell receptor activation, either alone or in combination with anti-CD28. This lack of proliferation correlated with a failure to accumulate nuclear factor of activated T cells c1 (NFATc1) in the cell nucleus and, as this process is negatively regulated by JNK signalling, most probably reflects a failure to restrain JNK activity in these cells . Consistent with this, re-stimulation of activated DUSP1 −/− CD4+ T cells with anti-CD3 also caused an increase in JNK-dependent activation-induced cell death (AICD). The differentiation of effector T cell lineages was also affected by deletion of DUSP1 /MKP-1 with naïve DUSP1 −/− CD4+ T cells showing deficits in effector cytokine producing type-1 helper (Th1) and pro-inflammatory type-17 helper (Th17) cell differentiation and function, while in naïve CD8 + T cells DUSP1 /MKP-1 deficiency resulted in lower production of the CD8 + T cell effector cytokines IFN-γ and TNFα . Finally, DUSP1 /MKP-1was found to be required for anti-influenza T cell responses in infected mice with infected DUSP1 −/− animals showing defective influenza virus-specific CD4 + and CD8 + T cell responses and clear signs of impaired viral clearance. In contrast, mice lacking DUSP1 /MKP-1 were protected from experimentally induced autoimmune encephalitis (EAE) following injection of myelin oligodendrocyte glycoprotein peptide (MOG 35–55 ). This resulted from an intrinsic defect in MKP-1 KO CD4 + T cells, which showed reduced production of IL-17 and IFNγ and demonstrates a key role for DUSP1 /MKP-1 in mediating autoreactive CD4 + T cell responses in vivo . As well as performing key functions in innate immunity, dendritic cells form a bridge between innate and adaptive immune responses by acting as antigen presenting cells for the priming of both CD4+ T helper (Th) and CD8+ cytotoxic T lymphocytes (Tc) . As well as affecting the function of these T cell subsets, it turns out that DUSP1 /MKP-1 also plays a key role in facilitating this crosstalk. Huang et al. used a model in which the immune system in lethally irradiated mice was reconstituted with a mix of bone marrow from DUSP1 −/− /Rag1 −/− and WT mice (5:1 ratio) and compared with mice reconstituted using DUSP1 +/+ /Rag1 −/− and WT (5:1 ratio) bone marrow. In both cohorts the T cells were derived from the WT marrow (Rag1 −/− bone marrow cannot generate mature T and B cells) and thus expressed DUSP1 /MKP-1 while the cells of the innate immune system were either null or WT for DUSP1 /MKP-1. Using two mouse infection models, the Listeria monocytogenes (Th1 biased model) and Candida albicans (Th17 biased model), they found that dendritic cells lacking DUSP1 /MKP-1 exhibited reduced IL-12 production and attenuated IFNγ expression and Th1 responses. In contrast, the production of IL-6 by dendritic cells lacking DUSP1 /MKP-1 was enhanced and this resulted in an exaggerated Th17 response. 
In addition, DUSP1 /MKP-1 suppressed the release of transforming growth factor β2 (TGFβ2) by dendritic cells, thus inhibiting the development of inducible regulatory T cells (Treg). At the biochemical level, these altered responses were mediated by increased p38 MAPK activity in dendritic cells lacking DUSP1 /MKP-1. In conclusion this work clearly shows that the activity of DUSP1 /MKP-1 in the dendritic cells of the innate immune system is a critical regulator of signals that dictate the course of adaptive immune responses at the immunological synapse . One interesting observation arising from these studies of DUSP1 /MKP-1 in innate and adaptive immunity is that whereas DUSP1 /MKP-1 mainly targets p38 MAPK in macrophages and dendritic cells, the T cell effects of DUSP1 /MKP-1 loss seem to be mediated predominantly by increased JNK activity. This suggests that there is cell type specificity with respect to DUSP1 /MKP-1 activity towards different MAPK isoforms. The mechanism by which this might be achieved is unclear, but may be related to post-translational modification. DUSP1 /MKP-1 is phosphorylated and this is known to modulate its stability . More recently, it was shown that p300 histone acetylase-mediated acetylation of lysine 57, which lies just C-terminal of the KIM within the amino terminal domain of DUSP1 /MKP-1, reinforces its interaction with and ability to dephosphorylate p38 MAPK . This can be opposed by a subset of specific histone deacetylases (HDACs 1–3) in mouse macrophages , suggesting one possible mechanism by which the canonical substrate selectivity of DUSP1 /MKP-1 might be regulated in a cell type specific manner. Finally, given its key role as a critical regulator of innate and adaptive immunity , loss of DUSP1 /MKP-1 was also found to exacerbate a range of inflammatory phenotypes in mouse models including experimental colitis , anaphylaxis and psoriasis , However, for reasons as yet unclear, loss of DUSP1 /MKP-1 did not sensitize mice to the development of spontaneous age-dependent osteoarthritis, despite the involvement of an inflammatory process and mediators such as TNFα and interleukin-1β in this disease . DUSP1 /MKP-1 is also directly targeted by a range of immune modulators. Enhanced expression of DUSP1 /MKP-1 underpins, at least in part, the anti-inflammatory activity of glucocorticoids and is also observed in response to vitamin D and transforming growth factor-beta (TGFβ) both of which are anti-inflammatory. In contrast, pro-inflammatory stimuli such as IFN-γ and interleukin-17A (IL-17A) suppress DUSP1 /MKP-1 expression and thus increase signalling through the p38 and JNK MAPK pathways . 2.1.2 DUSP1 /MKP-1 in metabolic homeostasis The first indication that DUSP1 /MKP-1 might play a role in regulating metabolic homeostasis came with the finding that DUSP1 −/− mice were resistant to diet-induced obesity and that this reflected a higher level of energy expenditure, but not overall activity in the null mice . Surprisingly, despite remaining lean on a high-fat diet (HFD), DUSP1 −/− mice did become glucose intolerant (as would wild type animals), while still being protected from hepatic steatosis. This phenotype correlated with increased levels of JNK, p38 and ERK activity in insulin responsive tissues. However, DUSP1 −/− mice did not show abnormalities in insulin signalling or glucose homeostasis, despite an established role for JNK signalling in promoting insulin resistance . 
This apparent contradiction was possibly due to the finding that loss of DUSP1/MKP-1 affects nuclear rather than cytoplasmic JNK activity and that the latter is responsible for JNK-dependent abnormalities in the response to insulin. Subsequent work has revealed mechanistic aspects of the DUSP1−/− metabolic phenotype. Firstly, mice lacking DUSP1/MKP-1 are protected from the loss of oxidative (slow-twitch) myofibers in skeletal muscle. The overall effect of this is to favour oxidative over glycolytic metabolism and, because the latter consumes less energy, to protect against diet-induced obesity. This effect seems to be secondary to increased p38 MAPK-mediated phosphorylation and stabilisation of peroxisome proliferator-activated receptor-gamma coactivator 1α (PGC-1α), thus increasing its activity as a regulator of mitochondrial biogenesis and energy expenditure. Experiments in which grossly obese leptin-resistant (db/db) mice were crossed with DUSP1−/− animals revealed that loss of DUSP1/MKP-1 protected against hepatic steatosis. By increasing MAPK-dependent phosphorylation of peroxisome proliferator-activated receptor-γ (PPARγ) at a site (Ser112) that negatively regulates its activity, loss of DUSP1/MKP-1 prevents the PPARγ-dependent expression of lipogenic genes, thus reducing lipid droplet formation in hepatocytes.

While this work sheds some light on the functions of DUSP1/MKP-1 in metabolic homeostasis, a severe limitation of these studies is the use of a whole-body DUSP1/MKP-1 knockout. Metabolic control is complex and subject to both central and peripheral regulation. Furthermore, diet-induced obesity has an inflammatory component and the role(s) of DUSP1/MKP-1 in regulating immune responses may also be a confounding factor. To begin to address this, a conditional DUSP1fl/fl mouse has now been employed to study the metabolic effects of DUSP1/MKP-1 deletion in specific tissues. Liver-specific knockout of DUSP1/MKP-1 (MKP-1-LKO) using albumin-Cre (Alb-Cre) resulted in increased hepatic JNK and p38 activation. However, unlike DUSP1−/− mice, MKP-1-LKO animals exhibited increased adiposity, fasting hyperglycaemia and hyperinsulinemia on a normal chow diet, indicating that hepatic DUSP1/MKP-1 regulates glucose homeostasis. This was confirmed in subsequent experiments using hyperinsulinemic-euglycemic clamps, which demonstrated that MKP-1-LKO mice were hyperglycaemic, glucose intolerant and developed hepatic insulin resistance. While DUSP1−/− mice were resistant to HFD-induced obesity, MKP-1-LKO mice were more susceptible, but were still protected against hepatic steatosis. Furthermore, unlike DUSP1−/− mice, they showed decreased energy expenditure. Mechanistically, the effects of DUSP1/MKP-1 deletion on glucose metabolism were found to be secondary to increased hepatic p38- and JNK-mediated transcription of gluconeogenic genes, increased p38-dependent phosphorylation of cyclic AMP responsive element binding protein (CREB), which promotes gluconeogenesis through PGC1/PPARγ, and decreased activation of Signal Transducer and Activator of Transcription 3 (STAT3), a negative regulator of gluconeogenesis. The latter effect is probably an indirect result of the lower circulating levels of IL-6 in MKP-1-LKO mice, as this metabolic cytokine is a potent inducer of Janus kinase (JAK)-STAT signalling. Finally, the decreased energy expenditure observed in MKP-1-LKO mice may be related to reduced levels of IL-6 and fibroblast growth factor 21 (FGF21).
Both factors promote energy expenditure, insulin sensitivity, fatty acid oxidation, and weight loss, and their reduction would be expected to impair skeletal muscle oxidative capacity and thus increase susceptibility to diet-induced obesity.

Skeletal muscle plays a major role in the regulation of glucose metabolism and metabolic homeostasis. Following on from the liver-specific deletion of DUSP1/MKP-1, the effects of skeletal muscle-specific loss of this phosphatase (MKP-1-MKO), using human α-skeletal actin (HSA-Cre), have now been studied. MKP-1-MKO mice show increased levels of p38 and JNK signalling in skeletal muscle and are significantly protected from HFD-induced weight gain. As was the case in the DUSP1−/− mice, the failure to gain weight was secondary to enhanced energy expenditure when compared with MKP-1fl/fl controls, and no differences in either food intake or activity between the two genotypes were observed. Interestingly, MKP-1-MKO mice were also resistant to hepatic steatosis, which was consistent with lower levels of hepatic PPARγ and sterol regulatory element-binding protein 1c (SREBP-1c) expression. However, no changes in either p38 or JNK activity were detected in liver tissue. Glucose tolerance (GTT) and insulin tolerance (ITT) tests revealed that MKP-1-MKO mice on a HFD produced lower levels of circulating insulin and were insulin sensitive, indicating that they are protected from the development of insulin resistance. Biochemically, an unexpected role for increased PI3-kinase-Akt signalling, resulting from microRNA-21 (miR-21)-dependent down-regulation of phosphatase and tensin homolog (PTEN), was uncovered in MKP-1-MKO mice and this could contribute to the increased insulin sensitivity observed in these animals. Finally, consistent with the results of whole-body deletion of DUSP1/MKP-1, the increased energy expenditure observed in MKP-1-MKO mice was secondary to an increase in the proportion of oxidative myofibers and was reflected in enhanced oxidative capacity and mitochondrial function in skeletal muscle.

Taken together, these results begin to unravel some of the complexity and tissue-specific interplay of DUSP1/MKP-1 action in the regulation of metabolic homeostasis and also emphasise the importance of compartmentalised nuclear regulation of p38 and JNK activities in mediating the phenotypes observed. The observation that DUSP1/MKP-1 is up-regulated in insulin-responsive tissues in response to a HFD in mice and also in obese humans indicates that it forms part of a key stress response that leads to decreased energy expenditure in skeletal muscle, thus contributing to weight gain, and may also mediate at least some of the adverse consequences of this disease, including abnormalities in glucose metabolism and hepatosteatosis. The further use of conditional DUSP1/MKP-1 ablation will reveal the relative importance of MAPK regulation by this phosphatase in distinct tissues in energy homeostasis and, from the information gathered so far, MKP-1/DUSP1 continues to be a potential pharmacological target for the treatment of metabolic disease.

2.1.3 DUSP1/MKP-1 in cancer

Given the central importance of deregulated MAPK signalling in the initiation and progression of human cancers, it is no great surprise that the involvement of MKPs in regulating various aspects of the cancer phenotype has been of widespread interest.
Disappointingly, given that DUSP1/MKP-1 was both the first MKP to be discovered and also the first to be deleted from the mouse genome, there are currently no published studies in which DUSP1/MKP-1 has been directly implicated in either tumour initiation or progression. It is hoped that the recent development of the conditional DUSP1/MKP-1 mouse (see 2.1.2) will facilitate definitive experiments, particularly as this model avoids the potentially confounding effects of the immune and inflammatory abnormalities seen when DUSP1/MKP-1 is knocked out globally. In contrast, over the 25 or so years since DUSP1/MKP-1 and its role in regulating MAP kinase signalling were discovered, there have been numerous publications reporting either increased or reduced expression of DUSP1/MKP-1 in a wide range of human tumours including breast, pancreas, gastric, ovary, lung, skin and prostate. In addition, a number of studies have relied on ectopic overexpression of DUSP1/MKP-1 in normal and cancer cell lines to study its possible role in modulating oncogenic signalling. These studies have been extensively reviewed elsewhere and, as they have often yielded equivocal or even contradictory information regarding the role of DUSP1/MKP-1 in cancer, it is not proposed to list or discuss them further here.

One aspect of cancer biology in which DUSP1/MKP-1 does appear to play an important role is in the response of normal and cancer cells to a range of chemical and physical insults, including modalities used in cancer chemotherapy. Soon after it became clear that the p38 and JNK MAPKs were the preferred substrates for DUSP1/MKP-1, it was observed that the overexpression of this phosphatase enhanced cellular resistance to both UV-radiation and the chemotherapeutic drug cisplatin and that this was related to the suppression of JNK-mediated apoptosis. That DUSP1/MKP-1 played a crucial role in modulating sensitivity to these insults was confirmed when MEFs derived from DUSP1−/− mice were found to be sensitive to UV-radiation, cisplatin, hydrogen peroxide and anisomycin. In normal cells, DUSP1/MKP-1 expression is induced by UV and cisplatin via activation of p38 MAPK, whereas it is the suppression of JNK activity by DUSP1/MKP-1 that modulates cell death. This indicates that DUSP1/MKP-1-mediated crosstalk between these two distinct MAPK pathways regulates cellular sensitivity. Thus it is likely that elevated expression of DUSP1/MKP-1 in tumours can mediate chemoresistance, and this is supported by studies in non-small cell lung cancer (NSCLC), where overexpression of DUSP1/MKP-1 is observed and patients become resistant to treatment with cisplatin. In NSCLC cell lines where DUSP1/MKP-1 was constitutively expressed, siRNA knockdown increased cisplatin sensitivity some 10-fold, reduced the growth of these cell lines in nude mice and rendered the resulting tumours cisplatin sensitive. In lung cancer patients, dexamethasone is also often co-administered with cisplatin to ameliorate the undesirable side effects of treatment. However, glucocorticoids are known to upregulate DUSP1/MKP-1 expression and, not surprisingly, dexamethasone effectively suppressed cisplatin-induced apoptosis in a lung adenocarcinoma cell line, indicating that DUSP1/MKP-1 plays a key role in this potentially undesirable drug-drug interaction.
The role of DUSP1/MKP-1 in mediating resistance to chemotherapy appears not to be restricted to cisplatin, as the ability of this phosphatase to inhibit JNK-mediated apoptosis has also been implicated in the resistance of pancreatic cancer cells to gemcitabine, multidrug resistance in glioblastoma, resistance to doxorubicin and taxanes in breast cancer and resistance to the proteasome inhibitor bortezomib. Finally, a recent paper has implicated DUSP1/MKP-1 in a growth factor-dependent pathway which promotes intrinsic resistance to the tyrosine kinase inhibitors (TKIs) used to treat chronic myeloid leukemia (CML). In mouse pro-B BaF3 cells engineered to express the breakpoint cluster region (BCR)-Abl tyrosine kinase fusion, which drives CML, Kesarwani et al. found that DUSP1/MKP-1 and FBJ osteosarcoma oncogene (Fos) were responsible for resistance to the Abl TKI imatinib (Gleevec). Genetic deletion or pharmacological inhibition of Fos and DUSP1/MKP-1 eradicated minimal residual disease (MRD) in multiple in vivo models as well as in patient-derived mouse xenografts. Mechanistically, DUSP1/MKP-1 seems to influence TKI sensitivity via its ability to suppress p38 MAPK activity and modulate AP1-dependent transcriptional networks. The latter hypothesis is supported by the finding that SB202190, a specific p38 MAPK inhibitor, also conferred imatinib resistance. While these results are potentially exciting, some caution is necessary in the interpretation of the data. BCI (2-benzylidene-3-(cyclohexylamino)-1-indanone hydrochloride), the “specific” DUSP1 inhibitor used to treat mice and reverse disease in a retroviral bone marrow transduction transplantation leukemogenesis model, is both highly toxic and relatively non-specific.

2.1.4 DUSP1/MKP-1 function in other tissues

Given the key role that MAPK signalling plays in aspects of brain development and function, it is unsurprising that MKPs have been implicated in the regulation of these processes. Indeed, DUSP1/MKP-1 plays important roles in neural cell development, neuronal cell survival and death, glial cell function, and events which underpin learning and memory (reviewed in ). In terms of pathophysiology, an important observation was that DUSP1/MKP-1 levels were elevated in the hippocampal region of post-mortem brain from patients who had been diagnosed with major depressive disorder (MDD). MDD is characterised by chronic or episodic depression and carries a significant (2–7%) risk of suicide. Duric et al. found that DUSP1/MKP-1 was also elevated in the hippocampus of rats exposed to chronic unpredictable stress (CUS), an effect that was attenuated by treatment with the antidepressant drug fluoxetine (Prozac), a selective serotonin reuptake inhibitor. Furthermore, adenoviral-mediated expression of DUSP1/MKP-1 in the hippocampus caused anhedonia (an inability to experience pleasure), as assessed by a reduced preference for sucrose over water, and these animals displayed other surrogates of depressive behaviour or helplessness, such as disturbed feeding and increased immobility in the forced swim test, all of which were also seen in the CUS-exposed rats. Interestingly, all of the latter endpoints were suppressed in CUS-exposed DUSP1−/− mice when compared to wild type controls. Mechanistically, these changes were associated with a reduction in phospho-ERK1/2 levels in CUS-exposed wild type mice, which was not observed in DUSP1−/− mice.
This result led the authors to conclude that ERK was the relevant DUSP1/MKP-1 target. This finding is somewhat surprising in the light of our knowledge that JNK and p38, but not ERK, are the preferred substrates for this phosphatase and also conflicts with a previous study in which a reduction in hippocampal phospho-JNK but not phospho-ERK was observed in rats exposed to CUS. Finally, a recent study has identified similar changes in DUSP1/MKP-1 levels in the anterior cingulate cortex (ACC) of mice exposed to neuropathic pain and CUS, which were again reversed by fluoxetine. While not shedding new light on the biochemical mechanisms involved, this latter study does implicate the regulation of MAPK signalling by DUSP1/MKP-1 in another brain region tightly associated with regulating mood-related functions. With respect to neurodegenerative disorders, DUSP1/MKP-1 has been reported to mediate neuroprotective effects in both in vitro and in vivo models of Huntington's disease through its ability to suppress polyglutamine-expanded huntingtin-induced activation of c-Jun N-terminal kinases (JNKs) and p38 MAPKs. Finally, by suppressing p38 MAPK activity, DUSP1/MKP-1 has been reported to protect dopaminergic neurons from the toxic effects of 6-hydroxydopamine (6-OHDA), suggesting that increasing MKP-1 expression or activity might be a viable strategy in the treatment of Parkinson's disease.

DUSP1/MKP-1 has also been implicated in muscle regeneration, as DUSP1−/− mice are impaired in their ability to recover from experimental muscle injury and, when crossed into a mouse model of Duchenne muscular dystrophy (the mdx dystrophin null), they display exacerbated muscular dystrophinopathy. Interestingly, this is exactly the reciprocal of the phenotype observed after deletion of DUSP10/MKP-5 (see ). More recently, the study of DUSP1−/−/DUSP10−/− double knockout (DKO) mice revealed a severe impairment in muscle regeneration. Satellite cells, the precursors of muscle cells, were less proliferative and DKO mice had increased inflammation at sites of injury, suggesting that the positive regulation of myogenesis by DUSP1/MKP-1 is dominant over negative regulation by DUSP10/MKP-5. Despite the fact that they share common substrates in JNK and p38, it is clear that these two MKPs regulate distinct signalling events. This may be related to the fact that while DUSP1/MKP-1 regulates nuclear MAPK activity, DUSP10/MKP-5 can impinge on cytosolic signalling, and thus the two MKPs may regulate quite distinct sets of MAPK substrates.

2.2 DUSP2

DUSP2 (also known as PAC-1) was first identified as a mitogen-inducible gene in human T-cells and is most closely related to DUSP1/MKP-1 and DUSP4/MKP-2, sharing 71% and 68% amino acid identity, respectively. Mainly expressed in hematopoietic tissue, DUSP2 is transcriptionally induced by activation of the ERK1/2 signalling pathway. When expressed in mammalian cells, DUSP2 favours dephosphorylation of ERK1/2 and p38 MAPKs, being less able to inactivate JNK. Its lack of activity against JNK was later suggested to be a result of the relative inability of this MAPK to cause catalytic activation of DUSP2 when compared with ERK2. In a recent twist, DUSP2 was found to be unique amongst the 10 mammalian MKPs in being able to bind to and dephosphorylate the “atypical” MAPKs ERK3 and ERK4.
In both ERK3 and ERK4, the classical T-X-Y motif in the activation loop is replaced by S-E-G, in which the serine residue is the sole phospho-acceptor, and DUSP2 efficiently dephosphorylates this residue in cultured cells.

2.2.1 DUSP2 in innate and adaptive immunity

DUSP2 expression is restricted to thymus, spleen and lymph nodes. However, DUSP2−/− mice develop normally and show no abnormalities in the numbers of lymphocytes in blood and bone marrow. Granulocyte numbers and lymphoid tissue development are also normal, indicating that DUSP2 is not required for immune system development. However, using the K/BxN model of inflammatory arthritis, wild type mice injected with arthritogenic K/BxN serum containing autoantibodies to glucose-6-phosphate isomerase (GPI) developed peripheral inflammatory arthritis within 2 days, while DUSP2−/− mice were protected. Further analysis showed that DUSP2−/− animals had impaired effector responses, such as inflammatory mediator production by macrophages and mast cells, and decreased mast cell survival. Taken together, these results demonstrate an unexpected role for DUSP2 as a positive mediator of inflammation. Puzzlingly, stimulated mast cells and macrophages lacking DUSP2 displayed decreased ERK1/2 and p38 MAPK phosphorylation and increased JNK phosphorylation, which is exactly the opposite of the result predicted by prior biochemical studies. No compensatory changes in the expression of other MKPs were observed and the authors invoke pathway crosstalk, postulating that the increase in JNK activity on DUSP2 deletion resulted in suppression of ERK activity.

More recently, Lu et al. have studied the role of DUSP2 in T cell development and differentiation and found that loss of this phosphatase has a profound effect on the differentiation of naive T cells in vitro, favouring Th17 differentiation while inhibiting differentiation into Treg cells. Using the dextran sodium sulfate (DSS)-induced model of intestinal inflammation and colitis, they further show that DUSP2−/− mice exhibit more severe disease when compared to wild type, as evidenced by increased mucosal hyperemia and colonic ulceration. Consistent with the in vitro results, this pathology is accompanied by higher levels of Th17 cells in DSS-treated DUSP2−/− colon and increased levels of pro-inflammatory cytokines including IL-6, IL-17, TNFα and interleukin-1β (IL-1β). Mechanistically, while levels of phospho-ERK and phospho-p38 were higher in untreated DUSP2−/− colon compared to wild type, no differences were seen in DSS-treated colon from the two genotypes. However, higher levels of phospho-STAT3 were consistently seen in mice lacking DUSP2 and the authors hypothesise that this transcription factor is a direct DUSP2 substrate in vivo. However, as JAK/STAT signalling is potently activated in response to IL-6 and this cytokine is overproduced in response to DUSP2 deletion, some caution must be attached to this interpretation, particularly as DUSP2 (like DUSP6/MKP-3, see ) undergoes catalytic activation by bound ERK2, implying that its full activity as a protein phosphatase is dependent on binding to a MAPK substrate. Taken together, these results demonstrate that DUSP2 plays key roles in both the innate and adaptive immune systems, which have implications for the initiation and progression of pathology in murine models of human inflammatory disease.
However, at present it is unclear whether these relate to the direct activity of this phosphatase in modulating MAPK signalling or may involve other relevant targets. Clearly, more work is required to reconcile the in vivo observations with a precise molecular mechanism.

2.2.2 DUSP2 in cancer

Thus far, DUSP2−/− mice have not been crossed into any of the commonly used murine cancer models and reports of the involvement of DUSP2 in cancer are relatively scant. Down-regulation of DUSP2 has been reported in a number of solid tumours, where its expression level was inversely proportional to that of the hypoxia-inducible transcription factor HIF-1α and its loss seemed to mediate increased ERK activation and chemoresistance in cancer cell lines and to contribute to colon cancer “stemness”. Given its expression in hematopoietic tissues, there are also a number of studies linking DUSP2 with blood cell cancers. Down-regulation of DUSP2 in acute myeloid leukemia (AML) is associated with constitutive ERK activation, while recent data from cancer genome sequencing of diffuse large B-cell lymphoma (DLBCL), the major form of non-Hodgkin's lymphoma, reveal that DUSP2 is one of the most frequently mutated genes in this disease. The observation that DUSP2 expression is highly inducible upon stimulation of B-cell lymphoma cell lines suggests that mutations in DUSP2 may have the potential to modify MAPK signalling in DLBCL. It will be vital to determine the effects of these mutations on the localisation or activity of DUSP2 in order to explore the possible contribution of this phosphatase to the initiation and/or progression of disease.

2.3 DUSP4/MKP-2

DUSP4/MKP-2 was amongst the very earliest of the MKPs to be characterised and is most closely related to DUSP1/MKP-1, sharing 58.8% identity at the amino-acid sequence level. Although it is not as widely studied, DUSP4/MKP-2 shares many features with its nearest relative, including transcriptional regulation in response to growth factors, an ability to dephosphorylate ERK, JNK and p38 MAPKs and regulation of DUSP4/MKP-2 protein stability by the phosphorylation of its C-terminus. The generation of knockout mice has now advanced our knowledge of DUSP4/MKP-2 function in a number of areas.

2.3.1 DUSP4/MKP-2 in innate and adaptive immunity

The earliest reports using DUSP4−/− mice centred on its possible function as a regulator of innate immunity and inflammation. BMDMs from DUSP4−/− mice showed increased levels of both JNK and p38 but not ERK signalling in response to LPS. This correlated with a potentiation of LPS-stimulated induction of the pro-inflammatory cytokines IL-6, IL-12β (IL-12p40) and TNFα, and also cyclooxygenase-2 (COX-2)-derived prostaglandin E2 (PGE2) production. However, IL-10 was suppressed, as was inducible nitric oxide synthase (iNOS) expression, while arginase-1 levels were increased. The reciprocal changes in iNOS/arginase-1 levels would tend to suppress nitric oxide (NO) production, as arginase-1 competes with iNOS for the same substrate. Following infection with the intracellular parasite Leishmania mexicana, mice lacking DUSP4/MKP-2 were found to be more susceptible to infection, with an increased parasite burden and lesion size, and this was accompanied by a suppression of Th1 and/or increased Th2 responses. The increased susceptibility to Leishmania mexicana infection was due to decreased iNOS and increased expression and function of arginase-1, rather than to any modulation of cytokine synthesis.
Taken together, these results suggest that DUSP4/MKP-2 does not display simple functional redundancy with respect to its near relative, but instead is protective against Leishmania mexicana infection due to up-regulation of iNOS and suppression of arginase-1 expression, thus promoting NO-mediated parasite death. This mechanism was also found to account for the protective effects of DUSP4/MKP-2 against Leishmania donovani, the causative agent of visceral leishmaniasis, and Toxoplasma gondii, which causes toxoplasmosis. Differences between DUSP4/MKP-2 and DUSP1/MKP-1 were further underlined in studies of the response of DUSP4−/− mice to experimental LPS-induced sepsis. The first major surprise came with the discovery that, in contrast to mice lacking DUSP1/MKP-1, mice lacking DUSP4/MKP-2 were more resistant to endotoxic shock and also had lower levels of circulating IL-1β, IL-6, and TNFα. Furthermore, LPS-stimulated BMDMs derived from DUSP4−/− mice produced significantly less TNFα and IL-10 when compared to wild type cells and this was associated with increased levels of phosphorylated (active) ERK, but decreased levels of phospho-JNK and p38. It is unclear why there is a discrepancy between these results in LPS-stimulated BMDMs and those obtained by Al-Mutairi et al., but they went on to show that elevated ERK2 signalling led to induction of DUSP1/MKP-1 in the DUSP4−/− macrophages and that this MKP was responsible for the reduction in JNK and p38 signalling and reduced cytokine production. This supports a model in which ERK-mediated crosstalk between MKP-2 and MKP-1 acts to regulate cytokine production in response to LPS, a view supported by the observation that siRNA-mediated knockdown of DUSP1/MKP-1 increased the production of TNFα by DUSP4−/− BMDMs (a toy numerical sketch of this crosstalk is given below).

In the adaptive immune system, deletion of DUSP4/MKP-2, like that of DUSP1/MKP-1, does not affect thymocyte maturation and positive selection. Furthermore, no enhanced ERK, JNK, or p38 phosphorylation was observed in either activated or phorbol-12-myristate-13-acetate (PMA)-treated DUSP4−/− T cells. However, CD4+, but not CD8+, T cells did show higher rates of proliferation, although differentiated Th1 and Th2 T-cell functions in vivo were unaffected. The proliferative change in CD4+ T cells lacking DUSP4/MKP-2 was associated with increased STAT5 phosphorylation and interleukin-2 receptor alpha (CD25) expression. Subsequent work showed that DUSP4/MKP-2 decreases both the transcriptional activity and stability of STAT5, and both in vitro and in vivo data showed that DUSP4/MKP-2 deletion enhanced iTreg and reduced Th17 polarisation, while DUSP4-deficient mice were somewhat more resistant to the induction of experimental autoimmune encephalomyelitis. Finally, increased DUSP4/MKP-2 expression has been implicated in age-dependent defective adaptive immunity. Increased expression of DUSP4/MKP-2 in CD4+ memory T cells from older (>65 years) individuals inhibits the ERK- and JNK-dependent expression of CD40L and reduces the production of the cytokines interleukin-4 (IL-4) and interleukin-21 (IL-21) by follicular helper cells, thus impairing T cell-dependent B cell responses. These results suggest that specific inhibition of DUSP4/MKP-2 activity might form part of a strategy to combat increased morbidity from infections in the elderly. Taken together, these studies clearly implicate DUSP4/MKP-2 in the regulation of innate and adaptive immunity.
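To make the logic of this crosstalk concrete, the following toy calculation sketches the wiring described above: DUSP4/MKP-2 restrains ERK, ERK induces DUSP1/MKP-1, and DUSP1/MKP-1 restrains p38/JNK and hence cytokine output. Every numerical value and functional form in the sketch is an arbitrary assumption chosen only to make the qualitative behaviour visible; none is taken from the cited studies.

```python
# Illustrative steady-state sketch of the proposed DUSP4/MKP-2 -> ERK -> DUSP1/MKP-1
# crosstalk in LPS-stimulated macrophages. All values are arbitrary and unfitted.

def macrophage_steady_state(dusp4_present: bool) -> dict:
    lps_drive = 1.0                                  # LPS input into both MAPK branches
    dusp4_activity = 0.8 if dusp4_present else 0.0   # ERK-directed phosphatase activity
    erk = lps_drive / (1.0 + dusp4_activity)         # DUSP4/MKP-2 restrains ERK
    dusp1 = 0.2 + 0.6 * erk                          # DUSP1/MKP-1 is partly ERK-induced
    p38_jnk = lps_drive / (1.0 + 1.5 * dusp1)        # DUSP1/MKP-1 restrains p38/JNK
    tnf = p38_jnk                                    # cytokine output tracks p38/JNK
    return {"ERK": erk, "DUSP1": dusp1, "p38/JNK": p38_jnk, "TNFa": tnf}

for genotype, present in (("wild type", True), ("DUSP4-/-", False)):
    values = macrophage_steady_state(present)
    print(genotype, {name: round(value, 2) for name, value in values.items()})
# Expected pattern: the DUSP4-/- case shows higher ERK, more DUSP1/MKP-1 and, as a
# consequence, lower p38/JNK activity and TNFa output than the wild type case.
```

The only point of the sketch is that, in a system of coupled phosphatases, the net effect of deleting one of them on a given MAPK branch depends on the indirect induction of the others, which is the interpretation offered for the discordant BMDM data.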
However, despite assumptions that its close relationship with the prototypic MKP DUSP1/MKP-1 might indicate overlapping or identical functions, this enzyme seems to play a distinct role in regulating immune function. In macrophages there is some debate as to the effects of DUSP4/MKP-2 deletion on the activity of specific MAPK isoforms, with discordant results in LPS-treated BMDMs. In T cells it seems to mediate its effects via modulation of STAT5 and, as no perturbation in MAPK signalling was observed in cells lacking DUSP4/MKP-2, this has been taken as evidence of a MAPK-independent function for this enzyme.

2.3.2 DUSP4/MKP-2 in cancer

Although less well studied than DUSP1/MKP-1, there have nevertheless been numerous reports of either increased or reduced expression of DUSP4/MKP-2 in a wide variety of human cancer cell lines and primary tumours including pancreatic, lung, ovarian, breast, liver, thyroid and colon (reviewed in ). However, the majority of these studies have relied on either association and/or correlation with clinical outcome/tumour subtype or ectopic expression of this MKP in cancer cell lines. Thus far, DUSP4−/− knockout mice have not been crossed into or utilised in any of the well-characterised murine cancer models and there is no direct evidence of a role for DUSP4/MKP-2 in the initiation or progression of tumours. Furthermore, the fact that DUSP4/MKP-2 has a proven role in immune regulation indicates that a conditionally targeted allele of DUSP4/MKP-2 would be required to avoid the confounding effects of deleting this MKP in immune cells on any cancer phenotypes observed. Despite this, there have been some recent reports that indicate a role for this MKP in human cancers. DUSP4/MKP-2 was found to be epigenetically silenced in some 75% of >200 cases of diffuse large B-cell lymphoma (DLBCL) and a lack of DUSP4/MKP-2 was a negative prognostic factor in three independent cohorts of DLBCL patients. Mechanistically, this cancer appears to be dependent on JNK signalling for continued survival and loss of DUSP4/MKP-2 contributes to cancer progression by augmenting JNK activity. This is consistent with the results of ectopic expression of DUSP4/MKP-2, which ablates JNK activity and induces apoptosis in DLBCL cells, while dominant-negative interference with JNK also restricts survival. There are also indications that DUSP4/MKP-2 is implicated in resistance to chemotherapy, both in gastric cancer, where resistance to doxorubicin is associated with DUSP4/MKP-2-driven epithelial-mesenchymal transition (EMT), and in Her2-positive breast cancer, where DUSP4/MKP-2 is associated with resistance to the anti-Her2 humanised monoclonal antibody trastuzumab and siRNA-mediated silencing of DUSP4/MKP-2 re-sensitised breast cancer cell lines with an amplified Her2 oncogene to this agent.

2.3.3 DUSP4/MKP-2 in other tissues

DUSP4/MKP-2 is expressed at increasing levels during the process of neuronal differentiation in neural progenitors derived from retinoic acid (RA)-treated murine embryonic stem cells (mESCs) and lentiviral DUSP4/MKP-2 siRNA knockdown significantly retarded this process. Importantly, this phenotype could be rescued with siRNA-resistant wild type DUSP4/MKP-2 but not with a catalytically inactive mutant, and loss of DUSP4/MKP-2 resulted in increased levels of phospho-ERK, but not of phospho-JNK or phospho-p38, indicating that ERK is the relevant target.
Overall, these data indicate that DUSP4/MKP-2 plays a role in both the neural commitment of mESCs and neuronal differentiation, and may point to a wider role for this MKP in brain function and pathology. More recently, direct evidence of a role for DUSP4/MKP-2 in the brain has come from a study of hippocampal neuronal excitability, synaptic plasticity and behaviour in DUSP4−/− mice. Long-term potentiation (LTP) was found to be impaired in DUSP4−/− mice and the frequency of excitatory postsynaptic currents (EPSCs) was also increased in both hippocampal slices and hippocampal cultures. Finally, whereas locomotor activity and anxiety-like behaviour were normal in DUSP4−/− mice, hippocampal-dependent spatial reference and working memory were both somewhat impaired. Surprisingly, given the established role of ERK signalling in LTP, no abnormalities in ERK signalling were observed in either DUSP4−/− brain tissue or primary hippocampal cultures. However, JNK and p38 activation were not studied, and JNK in particular also plays a role in memory formation and synaptic plasticity.

2.4 DUSP5

DUSP5 was first identified as a growth factor- and heat shock-inducible nuclear MKP and is closely related to both DUSP1/MKP-1 and DUSP2/PAC-1. Despite its early discovery and characterisation as an MKP, little attention was paid to DUSP5, presumably on the assumption that it would share many of the properties of its nearest relatives with respect to a broad activity towards ERK, JNK and p38 MAPKs. However, it was later shown that DUSP5 is unique amongst the four inducible nuclear MKPs in being absolutely specific for ERK1/2. Furthermore, growth factor-inducible expression of DUSP5 is mediated by ERK activity, making it a classical negative feedback regulator of this signalling pathway (a toy quantitative sketch of such a feedback loop is given below), and DUSP5 binds tightly to its substrate and is able to anchor inactive ERK in the cell nucleus. Together, these properties define DUSP5 as the nuclear counterpart of the inducible cytoplasmic ERK-specific phosphatase DUSP6/MKP-3 (see ).

2.4.1 DUSP5 in innate and adaptive immunity

An early indication that DUSP5 might play a role in adaptive immunity came with the observation that it was highly induced following IL-2 stimulation of T-cells. This idea was seemingly reinforced by the finding that transgenic expression of DUSP5 in lymphoid cells arrested thymocyte development at the CD4+/CD8+ (double positive) stage and caused autoimmune symptoms in these animals. However, these results illustrate the limitations of overexpression experiments and probably reflect the function of ERK itself, rather than endogenous DUSP5, in regulating immune cell development. This has been confirmed by more recent experiments utilising DUSP5−/− mice, where global deletion had no effect on innate or adaptive immune cell numbers in the bone marrow, spleen or lymph nodes under homeostatic conditions. However, subjecting DUSP5−/− mice to acute immune challenges has revealed more subtle phenotypes that are modulated in a DUSP5-dependent manner. Thus, DUSP5 has been shown to be highly expressed in eosinophils, where it negatively regulates interleukin-33 (IL-33)-mediated survival via the suppression of IL-33-induced ERK activity and down-regulation of the anti-apoptotic protein B-cell lymphoma-extra large (Bcl-xL). Consequently, DUSP5−/− mice challenged by helminth infection displayed prolonged eosinophil survival and enhanced eosinophil effector functions, and were able to clear their parasite burden more efficiently following infection.
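As a purely illustrative aside, the defining property of DUSP5 introduced above — it is induced by ERK activity and then dephosphorylates (and anchors) nuclear ERK — amounts to a classical negative feedback loop, and its qualitative behaviour can be sketched with a minimal two-variable model. All of the rate constants, time units and functional forms below are arbitrary assumptions used only for illustration; they are not drawn from the studies cited in this section.

```python
# Minimal toy model of an ERK-induced, ERK-directed phosphatase ("DUSP5-like"
# negative feedback), integrated with the forward Euler method.

def simulate(feedback: bool, t_end: float = 300.0, dt: float = 0.01):
    erk_p, dusp = 0.0, 0.0       # active (phosphorylated) ERK fraction; phosphatase level
    k_act = 0.5                  # constant upstream (MEK-driven) activation of ERK
    k_basal = 0.05               # basal, phosphatase-independent inactivation of ERK
    k_cat = 0.5                  # inactivation of ERK by the induced phosphatase
    k_syn, k_deg = 0.1, 0.05     # ERK-driven synthesis and turnover of the phosphatase
    peak = 0.0
    for _ in range(int(t_end / dt)):
        d_erk = k_act * (1.0 - erk_p) - (k_basal + k_cat * dusp) * erk_p
        d_dusp = (k_syn * erk_p - k_deg * dusp) if feedback else 0.0
        erk_p += d_erk * dt
        dusp += d_dusp * dt
        peak = max(peak, erk_p)
    return peak, erk_p

for label, fb in (("feedback intact", True), ("feedback removed (DUSP5 loss)", False)):
    peak, final = simulate(fb)
    print(f"{label:30s} peak ERK-P = {peak:.2f}  steady-state ERK-P = {final:.2f}")
# With the loop intact, active ERK overshoots and then relaxes as the phosphatase
# accumulates; with the loop removed, active ERK settles at a persistently higher level.
```

The same qualitative argument applies to the other inducible MKPs discussed in this review; what distinguishes DUSP5 is that both the inducing signal and the substrate of the loop are specifically ERK1/2, and that the loop operates on the nuclear pool of the kinase.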
More recently, Kutty et al., while confirming that T cell development is normal in DUSP5−/− mice, have shown that in response to acute infection with lymphocytic choriomeningitis virus (LCMV) these animals have decreased numbers of short-lived effector cells (SLECs) and increased proportions of memory precursor effector cells (MPECs). Both cell types are derived from effector CD8+ T cells in response to acute infection, with SLECs being highly cytotoxic cells that readily undergo apoptosis, while MPECs retain the ability to proliferate and eventually develop into mature memory T cells. This defect was intrinsic to T cells, as bone marrow chimeric mice in which CD8+ T cells were reconstituted from DUSP5−/− donors showed an identical phenotype, and this study clearly indicates that DUSP5 plays an essential role in regulating the survival of SLECs. However, the precise mechanism(s) by which DUSP5 affects the differentiation and maintenance of the SLEC and MPEC populations, and the dependence of these effects on ERK activity, are as yet uncertain.

2.4.2 DUSP5 and cancer

The canonical Ras-ERK MAPK signalling pathway is frequently deregulated in human cancers, with activating mutations found in upstream components of the pathway including receptor tyrosine kinases (RTKs), Ras GTPases, the MAPK kinase kinase Braf and MAPK kinase (MEK). The observation that Braf is mutated in 40–60% of malignant melanomas and in tumours of the thyroid, colon and lung underscores the importance of the Ras-ERK pathway in malignant disease, making it an intense focus of anticancer drug discovery. In common with the cytoplasmic ERK-specific phosphatase DUSP6/MKP-3 (see ), elevated DUSP5 expression is observed in a range of Ras or Braf mutant cancer cells, where it is presumed to suppress oncogenic ERK activation. DUSP5 has also been reported to be subject to epigenetic silencing in gastric cancers and this correlated with poorer patient survival. More recently, DUSP5 down-regulation and promoter hypermethylation have been identified in colorectal tumour samples and cell lines. However, DUSP5 knockdown in colorectal cancer cell lines had limited effects on phospho-ERK levels and did not increase proliferation. Furthermore, a transgenic mouse overexpressing DUSP5 in the intestinal epithelium displayed no alterations in ERK signalling, intestinal homeostasis or adenoma formation, and the authors concluded that DUSP5 does not regulate intestinal development or tumourigenesis. Although surprising, given the demonstrable effects of DUSP5 overexpression on ERK activation in vitro, these results should be interpreted with a degree of caution, as the constitutive transgene used here cannot recapitulate the transcriptional dynamics inherent in feedback control exerted by endogenous DUSP5. In contrast, Rushworth et al. demonstrated that DUSP5 loss sensitised mice to HRasQ61L-driven skin papilloma formation in the well-established DMBA/TPA (7,12-dimethylbenz[a]anthracene/12-O-tetradecanoylphorbol-13-acetate)-inducible skin carcinogenesis model. Furthermore, in vitro experiments in DUSP5−/− MEFs revealed an essential non-redundant function for this MKP in suppressing nuclear ERK activity following acute pathway stimulation. Loss of DUSP5 also provoked the upregulation of a cohort of ERK-dependent genes, including SerpinB2, in TPA-stimulated MEFs.
SerpinB2 had previously been identified as a susceptibility gene in this model of skin carcinogenesis and concomitant deletion of SerpinB2 reversed the sensitivity of DUSP5−/− mice to DMBA/TPA-induced papilloma formation, identifying DUSP5 as a bona fide tumour suppressor by virtue of its ability to suppress SerpinB2 expression in this animal model of Ras-induced cancer. More recently, experiments using wild type and DUSP5−/− MEFs have demonstrated that DUSP5 function is dependent on the nature of the oncogenic driver. Thus, while loss of this MKP in the context of mutant Ras is compatible with continued cell proliferation, its deletion in cells expressing mutant BRafV600E causes ERK-dependent cell cycle arrest and senescence and prevents cell transformation by this oncogene in vitro. This latter study supports the idea that MKPs might either suppress or promote carcinogenesis depending on the oncogenic and tissue context, and it will be interesting to see the results of DUSP5 ablation in other, more clinically relevant, murine models of Ras- and Braf-driven cancer.

2.4.3 DUSP5 in other tissues

DUSP5 is implicated in cardiovascular development, where it is expressed in angioblasts and mature vasculature in zebrafish, and DUSP5 knockdown increased the etsrp+ (ets1-related protein) angioblast population during early embryonic development. DUSP5 overexpression also antagonised the function of a serine/threonine kinase, Snrk-1, which promotes angioblast development. DUSP5 has also been shown to act as a regulator of cardiac fibroblast proliferation and cardiac hypertrophy. Ferguson et al. demonstrated that the anti-hypertrophic activity of class I histone deacetylase (HDAC) inhibitors is mediated by their ability to increase DUSP5 gene expression, thus inhibiting both ERK activity and cardiac myocyte proliferation. Ectopic DUSP5 expression phenocopied the effects of HDAC inhibition, while DUSP5 knockdown rescued endogenous phospho-ERK levels following HDAC inhibition. A subsequent study has revealed that HDAC3-specific inhibitors also induce DUSP5 expression in mouse models of diabetic cardiomyopathy and that this is associated with reduced hypertrophy and fibrosis. DUSP5 expression is also suppressed by another epigenetic regulator, methyl-CpG-binding protein 2 (MeCP2); high MeCP2 expression was associated with cardiac fibrosis, while MeCP2 knockdown in cardiac fibroblasts increased DUSP5 expression and reduced both ERK activity and proliferation. Finally, in adult rats, DUSP5 deletion has been shown to modulate the myogenic response of cerebral arteries and autoregulation of cerebral blood flow, indicating that DUSP5 also plays a physiological role in vascular reactivity.
This possibly reflects either redundancy amongst ERK-specific phosphatases or the fact that ERK is not the preferred target for DUSP1 /MKP-1. However, in mature CD4+ T cells, loss of DUSP1 /MKP-1 seems to impact T cell function with decreased activation and proliferation following exposure to phorbol 12-myristate 13-acetate (PMA) and ionomycin and increased levels of JNK signalling . Furthermore, both CD4+ and CD8+ T cells lacking DUSP1 /MKP-1 showed reduced proliferation and interleukin-2 (IL-2) production after exposure to anti-CD3 antibody to mimic T cell receptor activation, either alone or in combination with anti-CD28. This lack of proliferation correlated with a failure to accumulate nuclear factor of activated T cells c1 (NFATc1) in the cell nucleus and, as this process is negatively regulated by JNK signalling, most probably reflects a failure to restrain JNK activity in these cells . Consistent with this, re-stimulation of activated DUSP1 −/− CD4+ T cells with anti-CD3 also caused an increase in JNK-dependent activation-induced cell death (AICD). The differentiation of effector T cell lineages was also affected by deletion of DUSP1 /MKP-1 with naïve DUSP1 −/− CD4+ T cells showing deficits in effector cytokine producing type-1 helper (Th1) and pro-inflammatory type-17 helper (Th17) cell differentiation and function, while in naïve CD8 + T cells DUSP1 /MKP-1 deficiency resulted in lower production of the CD8 + T cell effector cytokines IFN-γ and TNFα . Finally, DUSP1 /MKP-1was found to be required for anti-influenza T cell responses in infected mice with infected DUSP1 −/− animals showing defective influenza virus-specific CD4 + and CD8 + T cell responses and clear signs of impaired viral clearance. In contrast, mice lacking DUSP1 /MKP-1 were protected from experimentally induced autoimmune encephalitis (EAE) following injection of myelin oligodendrocyte glycoprotein peptide (MOG 35–55 ). This resulted from an intrinsic defect in MKP-1 KO CD4 + T cells, which showed reduced production of IL-17 and IFNγ and demonstrates a key role for DUSP1 /MKP-1 in mediating autoreactive CD4 + T cell responses in vivo . As well as performing key functions in innate immunity, dendritic cells form a bridge between innate and adaptive immune responses by acting as antigen presenting cells for the priming of both CD4+ T helper (Th) and CD8+ cytotoxic T lymphocytes (Tc) . As well as affecting the function of these T cell subsets, it turns out that DUSP1 /MKP-1 also plays a key role in facilitating this crosstalk. Huang et al. used a model in which the immune system in lethally irradiated mice was reconstituted with a mix of bone marrow from DUSP1 −/− /Rag1 −/− and WT mice (5:1 ratio) and compared with mice reconstituted using DUSP1 +/+ /Rag1 −/− and WT (5:1 ratio) bone marrow. In both cohorts the T cells were derived from the WT marrow (Rag1 −/− bone marrow cannot generate mature T and B cells) and thus expressed DUSP1 /MKP-1 while the cells of the innate immune system were either null or WT for DUSP1 /MKP-1. Using two mouse infection models, the Listeria monocytogenes (Th1 biased model) and Candida albicans (Th17 biased model), they found that dendritic cells lacking DUSP1 /MKP-1 exhibited reduced IL-12 production and attenuated IFNγ expression and Th1 responses. In contrast, the production of IL-6 by dendritic cells lacking DUSP1 /MKP-1 was enhanced and this resulted in an exaggerated Th17 response. 
In addition, DUSP1 /MKP-1 suppressed the release of transforming growth factor β2 (TGFβ2) by dendritic cells, thus inhibiting the development of inducible regulatory T cells (Treg). At the biochemical level, these altered responses were mediated by increased p38 MAPK activity in dendritic cells lacking DUSP1 /MKP-1. In conclusion this work clearly shows that the activity of DUSP1 /MKP-1 in the dendritic cells of the innate immune system is a critical regulator of signals that dictate the course of adaptive immune responses at the immunological synapse . One interesting observation arising from these studies of DUSP1 /MKP-1 in innate and adaptive immunity is that whereas DUSP1 /MKP-1 mainly targets p38 MAPK in macrophages and dendritic cells, the T cell effects of DUSP1 /MKP-1 loss seem to be mediated predominantly by increased JNK activity. This suggests that there is cell type specificity with respect to DUSP1 /MKP-1 activity towards different MAPK isoforms. The mechanism by which this might be achieved is unclear, but may be related to post-translational modification. DUSP1 /MKP-1 is phosphorylated and this is known to modulate its stability . More recently, it was shown that p300 histone acetylase-mediated acetylation of lysine 57, which lies just C-terminal of the KIM within the amino terminal domain of DUSP1 /MKP-1, reinforces its interaction with and ability to dephosphorylate p38 MAPK . This can be opposed by a subset of specific histone deacetylases (HDACs 1–3) in mouse macrophages , suggesting one possible mechanism by which the canonical substrate selectivity of DUSP1 /MKP-1 might be regulated in a cell type specific manner. Finally, given its key role as a critical regulator of innate and adaptive immunity , loss of DUSP1 /MKP-1 was also found to exacerbate a range of inflammatory phenotypes in mouse models including experimental colitis , anaphylaxis and psoriasis , However, for reasons as yet unclear, loss of DUSP1 /MKP-1 did not sensitize mice to the development of spontaneous age-dependent osteoarthritis, despite the involvement of an inflammatory process and mediators such as TNFα and interleukin-1β in this disease . DUSP1 /MKP-1 is also directly targeted by a range of immune modulators. Enhanced expression of DUSP1 /MKP-1 underpins, at least in part, the anti-inflammatory activity of glucocorticoids and is also observed in response to vitamin D and transforming growth factor-beta (TGFβ) both of which are anti-inflammatory. In contrast, pro-inflammatory stimuli such as IFN-γ and interleukin-17A (IL-17A) suppress DUSP1 /MKP-1 expression and thus increase signalling through the p38 and JNK MAPK pathways . 2.1.2 DUSP1 /MKP-1 in metabolic homeostasis The first indication that DUSP1 /MKP-1 might play a role in regulating metabolic homeostasis came with the finding that DUSP1 −/− mice were resistant to diet-induced obesity and that this reflected a higher level of energy expenditure, but not overall activity in the null mice . Surprisingly, despite remaining lean on a high-fat diet (HFD), DUSP1 −/− mice did become glucose intolerant (as would wild type animals), while still being protected from hepatic steatosis. This phenotype correlated with increased levels of JNK, p38 and ERK activity in insulin responsive tissues. However, DUSP1 −/− mice did not show abnormalities in insulin signalling or glucose homeostasis, despite an established role for JNK signalling in promoting insulin resistance . 
This apparent contradiction was possibly due to the finding that loss of DUSP1 /MKP-1 affects nuclear rather than cytoplasmic JNK activity and that the latter is responsible for JNK-dependent abnormalities in the response to insulin . Subsequent work has revealed mechanistic aspects of the DUSP1 −/− metabolic phenotype. Firstly, mice lacking DUSP1 /MKP-1 are protected from the loss of oxidative (slow-twitch) myofibers in skeletal muscle. The overall effect of this is to favour oxidative over glycolytic metabolism and, because the latter consumes less energy, to protect against diet-induced obesity . This effect seems to be secondary to increased p38 MAPK-mediated phosphorylation and stabilisation of peroxisome proliferator-activated receptor-gamma coactivator 1α (PGC-1α) thus increasing its activity as a regulator of mitochondrial biogenesis and energy expenditure. Experiments in which grossly obese leptin-resistant (db/db) mice were crossed with DUSP1 −/− animals revealed that loss of DUSP1 /MKP-1 protected against hepatic steatosis. By increasing MAPK-dependent phosphorylation of peroxisome proliferator-activated receptor-γ (PPARγ) at a site (Ser112) that negatively regulates its activity, loss of DUSP1 /MKP-1 prevents the PPARγ-dependent expression of lipogenic genes, thus reducing lipid droplet formation in hepatocytes . While this work sheds some light of the functions of DUSP1 /MKP-1 in metabolic homeostasis a severe limitation of these studies is the use of a whole body DUSP1 /MKP-1 knockout. Metabolic control is complex and subject to both central and peripheral regulation . Furthermore, diet-induced obesity has an inflammatory component and the role(s) of DUSP1 /MKP-1 in regulating immune responses may also be a confounding factor. To begin to address this, a conditional DUSP1 fl/fl mouse has now been employed to study the metabolic effects of DUSP1 /MKP-1 deletion in specific tissues. Liver specific knockout of DUSP1 /MKP-1 (MKP-1-LKO) using albumin-Cre (Alb-Cre) resulted in increased hepatic JNK and p38 activation. However, unlike DUSP1 −/− mice MKP-1-LKO animals exhibited increased adiposity, fasting hyperglycaemia and hyperinsulinemia on a normal chow diet, indicating that hepatic DUSP1 /MKP-1 regulates glucose homeostasis . This was confirmed in subsequent experiments using hyperinsulinemic-euglycemic clamps, which demonstrated that MKP-1-LKO mice were hyperglycaemic, glucose intolerant and develop hepatic insulin resistance . While DUSP1 −/− mice were resistant to HFD-induced obesity MKP-1-LKO mice were more susceptible, but were still protected against hepatic steatosis. Furthermore, unlike DUSP1 −/− mice, they showed decreased energy expenditure . Mechanistically, the effects of DUSP1 /MKP-1 deletion on glucose metabolism were found to be secondary to increased hepatic p38 and JNK mediated transcription of gluconeogenic genes, increased p38-dependent phosphorylation of cyclic AMP responsive element binding protein (CREB), which promotes gluconeogenesis through PGC1/PPARγ and decreased activation of Signal Transducer and Activator of Transcription 3 (STAT3), a negative regulator of gluconeogenesis . The latter effect is probably an indirect result of the lower circulating levels of IL-6 in MKP-1-LKO mice, as this metabolic cytokine is a potent inducer of Janus kinase (JAK)-STAT signalling. Finally, the decreased energy expenditure observed in MKP-1-LKO mice may be related to reduced levels of IL-6 and fibroblast growth factor 21 (FGF21). 
Both factors promote energy expenditure, insulin sensitivity, fatty acid oxidation and weight loss, and their reduction would be expected to impair skeletal muscle oxidative capacity and thus increase susceptibility to diet-induced obesity. Skeletal muscle plays a major role in the regulation of glucose metabolism and metabolic homeostasis. Following on from the liver-specific deletion of DUSP1/MKP-1, the effects of skeletal muscle-specific loss of this phosphatase (MKP-1-MKO), using human α-skeletal actin (HSA)-Cre, have now been studied. MKP-1-MKO mice show increased levels of p38 and JNK signalling in skeletal muscle and are significantly protected from HFD-induced weight gain. As was the case in the DUSP1−/− mice, the failure to gain weight was secondary to enhanced energy expenditure when compared with MKP-1 fl/fl controls, and no differences in either food intake or activity between the two genotypes were observed. Interestingly, MKP-1-MKO mice were also resistant to hepatic steatosis, which was consistent with lower levels of hepatic PPARγ and sterol regulatory element-binding protein 1c (SREBP-1c) expression. However, no changes in either p38 or JNK activity were detected in liver tissue. Glucose tolerance (GTT) and insulin tolerance (ITT) tests revealed that MKP-1-MKO mice on a HFD produced lower levels of circulating insulin and were insulin sensitive, indicating that they are protected from the development of insulin resistance. Biochemically, an unexpected role for increased PI3-kinase-Akt signalling resulting from microRNA-21 (miR-21)-dependent down-regulation of phosphatase and tensin homolog (PTEN) in MKP-1-MKO mice was uncovered and this could contribute to the increased insulin sensitivity observed in these animals. Finally, consistent with the results of whole-body deletion of DUSP1/MKP-1, the increased energy expenditure observed in MKP-1-MKO mice was secondary to an increase in the proportion of oxidative myofibers and was reflected in enhanced oxidative capacity and mitochondrial function in skeletal muscle. Taken together, these results begin to unravel some of the complexity and tissue-specific interplay of DUSP1/MKP-1 action in the regulation of metabolic homeostasis and also emphasise the importance of compartmentalised nuclear regulation of p38 and JNK activities in mediating the phenotypes observed. The observation that DUSP1/MKP-1 is up-regulated in insulin-responsive tissues in response to a HFD in mice and also in obese humans indicates that it forms part of a key stress response that leads to decreased energy expenditure in skeletal muscle, thus contributing to weight gain, and may also mediate at least some of the adverse consequences of obesity, including abnormalities in glucose metabolism and hepatosteatosis. The further use of conditional DUSP1/MKP-1 ablation will reveal the relative importance of MAPK regulation by this phosphatase in distinct tissues for energy homeostasis and, from the information gathered so far, DUSP1/MKP-1 continues to be a potential pharmacological target for the treatment of metabolic disease.
2.1.3 DUSP1/MKP-1 in cancer
Given the central importance of deregulated MAPK signalling in the initiation and progression of human cancers, it is no great surprise that the involvement of MKPs in regulating various aspects of the cancer phenotype has been of widespread interest.
Disappointingly, given that DUSP1/MKP-1 was both the first MKP to be discovered and also the first to be deleted from the mouse genome, there are currently no published studies in which DUSP1/MKP-1 has been directly implicated in either tumour initiation or progression. It is hoped that the recent development of the conditional DUSP1/MKP-1 mouse (see 2.1.2.) will facilitate definitive experiments, particularly as this model avoids the potentially confounding effects of the immune and inflammatory abnormalities seen when DUSP1/MKP-1 is knocked out globally. In contrast, over the 25 or so years since DUSP1/MKP-1 and its role in regulating MAP kinase signalling were discovered, there have been numerous publications reporting either increased or reduced expression of DUSP1/MKP-1 in a wide range of human tumours including breast, pancreas, gastric, ovary, lung, skin and prostate. In addition, a number of studies have relied on ectopic overexpression of DUSP1/MKP-1 in normal and cancer cell lines to study its possible role in modulating oncogenic signalling. These studies have been extensively reviewed elsewhere and, as they have often yielded equivocal or even contradictory information regarding the role of DUSP1/MKP-1 in cancer, it is not proposed to list or discuss them further here. One aspect of cancer biology in which DUSP1/MKP-1 does appear to play an important role is in the response of normal and cancer cells to a range of chemical and physical insults, including modalities used in cancer chemotherapy. Soon after it became clear that the p38 and JNK MAPKs were the preferred substrates for DUSP1/MKP-1, it was observed that the overexpression of this phosphatase enhanced cellular resistance to both UV-radiation and the chemotherapeutic drug cisplatin and that this was related to the suppression of JNK-mediated apoptosis. That DUSP1/MKP-1 played a crucial role in modulating sensitivity to these insults was confirmed when MEFs derived from DUSP1−/− mice were found to be sensitive to UV-radiation, cisplatin, hydrogen peroxide and anisomycin. In normal cells, DUSP1/MKP-1 expression is induced by UV and cisplatin via activation of p38 MAPK, whereas it is the suppression of JNK activity by DUSP1/MKP-1 that modulates cell death. This indicates that DUSP1/MKP-1-mediated crosstalk between these two distinct MAPK pathways regulates cellular sensitivity. Thus it is likely that elevated expression of DUSP1/MKP-1 in tumours can mediate chemoresistance, and this is supported by studies in non-small cell lung cancer (NSCLC), where overexpression of DUSP1/MKP-1 is observed and patients become resistant to treatment with cisplatin. In NSCLC cell lines where DUSP1/MKP-1 was constitutively expressed, siRNA knockdown increased cisplatin sensitivity some 10-fold, reduced the growth of these cell lines in nude mice and rendered the resulting tumours cisplatin sensitive. In lung cancer patients dexamethasone is also often co-administered with cisplatin to ameliorate the undesirable side effects of treatment. However, glucocorticoids are known to upregulate DUSP1/MKP-1 expression and, not surprisingly, dexamethasone effectively suppressed cisplatin-induced apoptosis in a lung adenocarcinoma cell line, indicating that DUSP1/MKP-1 plays a key role in this potentially undesirable drug-drug interaction. The role of DUSP1/MKP-1 in mediating resistance to chemotherapy appears not to be restricted to cisplatin, as the ability of this phosphatase to inhibit JNK-mediated apoptosis has also been implicated in the resistance of pancreatic cancer cells to gemcitabine, multidrug resistance in glioblastoma, resistance to doxorubicin and taxanes in breast cancer and resistance to the proteasome inhibitor bortezomib. Finally, a recent paper has implicated DUSP1/MKP-1 in a growth factor-dependent pathway, which promotes intrinsic resistance to the tyrosine kinase inhibitors (TKI) used to treat chronic myeloid leukemias (CML). In mouse pro-B BaF3 cells engineered to express the breakpoint cluster region (BCR)-Abl tyrosine kinase fusion, which drives CML, Kesarwani et al. found that DUSP1/MKP-1, along with FBJ osteosarcoma oncogene (Fos), was responsible for resistance to the Abl TKI imatinib (Gleevec). Genetic deletion or pharmacological inhibition of Fos and DUSP1/MKP-1 eradicated minimal residual disease (MRD) in multiple in vivo models as well as in patient-derived mouse xenografts. Mechanistically, DUSP1/MKP-1 seems to influence TKI sensitivity via its ability to suppress p38 MAPK activity and modulate AP1-dependent transcriptional networks. The latter hypothesis is supported by the finding that SB202190, a specific p38 MAPK inhibitor, also conferred imatinib resistance. While these results are potentially exciting, some caution is necessary in the interpretation of the data. BCI (2-benzylidene-3-(cyclohexylamino)-1-indanone hydrochloride), the “specific” DUSP1 inhibitor used to treat mice and reverse disease in a retroviral bone marrow transduction transplantation leukemogenesis model, is both highly toxic and relatively non-specific.
2.1.4 DUSP1/MKP-1 function in other tissues
Given the key role that MAPK signalling plays in aspects of brain development and function, it is unsurprising that MKPs have been implicated in the regulation of these processes. Indeed DUSP1/MKP-1 plays important roles in neural cell development, neuronal cell survival and death, glial cell function and events which underpin learning and memory. In terms of pathophysiology, an important observation was that DUSP1/MKP-1 levels were elevated in the hippocampal region of post-mortem brain from patients who had been diagnosed with major depressive disorder (MDD). MDD is characterised by chronic or episodic depression and carries a significant (2–7%) risk of suicide. Duric et al. found that DUSP1/MKP-1 was also elevated in the hippocampus of rats exposed to chronic unpredictable stress (CUS), an effect that was attenuated by treatment with the antidepressant drug fluoxetine (Prozac), a selective serotonin reuptake inhibitor. Furthermore, adenoviral-mediated expression of DUSP1/MKP-1 in the hippocampus caused anhedonia (an inability to experience pleasure), as assessed by a reduced preference for sucrose over water, and these animals displayed other surrogates of depressive behaviour or helplessness such as disturbed feeding and increased immobility in the forced swim test, all of which were also seen in the CUS-exposed rats. Interestingly, all of the latter endpoints were suppressed in CUS-exposed DUSP1−/− mice when compared to wild type controls. Mechanistically, these changes were associated with a reduction in phospho-ERK1/2 levels in CUS-exposed wild type mice, which was not observed in DUSP1−/− mice.
This result led the authors to conclude that ERK was the relevant DUSP1/MKP-1 target. This finding is somewhat surprising in the light of our knowledge that JNK and p38, but not ERK, are the preferred substrates for this phosphatase and also conflicts with a previous study in which a reduction in hippocampal phospho-JNK, but not phospho-ERK, was observed in rats exposed to CUS. Finally, a recent study has identified similar changes in DUSP1/MKP-1 levels in the anterior cingulate cortex (ACC) of mice exposed to neuropathic pain and CUS, which were again reversed by fluoxetine. While not shedding new light on the biochemical mechanisms involved, this latter study does implicate the regulation of MAPK signalling by DUSP1/MKP-1 in another brain region tightly associated with regulating mood-related functions. With respect to neurodegenerative disorders, DUSP1/MKP-1 has been reported to mediate neuroprotective effects in both in vitro and in vivo models of Huntington's disease through its ability to suppress polyglutamine-expanded huntingtin-induced activation of c-Jun N-terminal kinases (JNKs) and p38 MAPKs. Finally, by suppressing p38 MAPK activity, DUSP1/MKP-1 has been reported to protect dopaminergic neurons from the toxic effects of 6-hydroxydopamine (6-OHDA), suggesting that strategies aimed at either increasing MKP-1 expression or activity might be viable in the treatment of Parkinson's disease. DUSP1/MKP-1 has also been implicated in muscle regeneration, as DUSP1−/− mice are impaired in their ability to recover from experimental muscle injury and, when crossed into a mouse model of Duchenne muscular dystrophy (the mdx dystrophin null), they display exacerbated muscular dystrophinopathy. Interestingly, this is exactly the reciprocal of the phenotype observed after deletion of DUSP10/MKP-5.
More recently, the study of DUSP1−/−/DUSP10−/− double knockout (DKO) mice revealed a severe impairment in muscle regeneration. Satellite cells, the precursors of muscle cells, were less proliferative and DKO mice had increased inflammation at sites of injury, suggesting that the positive regulation of myogenesis by DUSP1/MKP-1 is dominant over negative regulation by DUSP10/MKP-5. Despite the fact that they share common substrates in JNK and p38, it is clear that these two MKPs regulate distinct signalling events. This may be related to the fact that while DUSP1/MKP-1 regulates nuclear MAPK activity, DUSP10/MKP-5 can impinge on cytosolic signalling, and thus the two MKPs may regulate quite distinct sets of MAPK substrates.
2.2 DUSP2
DUSP2 (also known as PAC-1) was first identified as a mitogen-inducible gene in human T-cells and is most closely related to DUSP1/MKP-1 and DUSP4/MKP-2, sharing 71% and 68% amino acid identity, respectively. Mainly expressed in hematopoietic tissue, DUSP2 transcription is induced by activation of the ERK1/2 signalling pathway. When expressed in mammalian cells, DUSP2 favours dephosphorylation of ERK1/2 and p38 MAPKs, being less able to inactivate JNK. Its lack of activity against JNK was later suggested to be a result of the relative inability of this MAPK to cause catalytic activation of DUSP2 when compared with ERK2. In a recent twist, DUSP2 was found to be unique amongst the 10 mammalian MKPs in being able to bind to and dephosphorylate the “atypical” MAP kinases ERK3 and ERK4. In both ERK3 and ERK4 the classical T-X-Y motif in the activation loop is replaced by S-E-G, in which the serine residue is the sole phospho-acceptor, and DUSP2 efficiently dephosphorylates this residue in cultured cells.
2.2.1 DUSP2 in innate and adaptive immunity
DUSP2 expression is restricted to thymus, spleen and lymph nodes. However, DUSP2−/− mice develop normally and show no abnormalities in the numbers of lymphocytes in blood and bone marrow. Granulocyte numbers and lymphoid tissue development are also normal, indicating that DUSP2 is not required for immune system development. However, using the K/BxN model of inflammatory arthritis, wild type mice injected with arthritogenic K/BxN serum containing autoantibodies to glucose-6-phosphate isomerase (GPI) developed peripheral inflammatory arthritis within 2 days, while DUSP2−/− mice were protected. Further analysis showed that DUSP2−/− animals had impaired effector responses such as inflammatory mediator production by macrophages and mast cells and decreased mast cell survival. Taken together, these results demonstrate an unexpected role for DUSP2 as a positive mediator of inflammation. Puzzlingly, stimulated mast cells and macrophages lacking DUSP2 displayed decreased ERK1/2 and p38 MAPK phosphorylation and increased JNK phosphorylation, which is exactly the opposite of the result predicted by prior biochemical studies. No compensatory changes in the expression of other MKPs were observed and the authors invoke pathway crosstalk, postulating that the increase in JNK activity on DUSP2 deletion resulted in suppression of ERK activity. More recently, Lu et al. have studied the role of DUSP2 in T cell development and differentiation and found that loss of this phosphatase has a profound effect on the differentiation of naive T cells in vitro, favouring Th17 differentiation while inhibiting differentiation into Treg cells. Using the dextran sodium sulfate (DSS)-induced model of intestinal inflammation and colitis, they further show that DUSP2−/− mice exhibit more severe disease when compared to wild type, as evidenced by increased mucosal hyperemia and colonic ulceration. Consistent with the in vitro results, this pathology is accompanied by higher levels of Th17 cells in DSS-treated DUSP2−/− colon and increased levels of pro-inflammatory cytokines including IL-6, IL-17, TNFα and interleukin-1beta (IL-1β). Mechanistically, while levels of phospho-ERK and phospho-p38 were higher in untreated DUSP2−/− colon compared to wild type, no differences were seen in DSS-treated colon from the two genotypes.
However, higher levels of phospho-STAT3 were consistently seen in mice lacking DUSP2 and the authors hypothesise that this transcription factor is a direct DUSP2 substrate in vivo. However, as JAK/STAT signalling is potently activated in response to IL-6 and this cytokine is overproduced in response to DUSP2 deletion, some caution must be attached to this interpretation, particularly as DUSP2 (like DUSP6/MKP-3) undergoes catalytic activation by bound ERK2, implying that its full activity as a protein phosphatase is dependent on binding to a MAPK substrate. Taken together, these results demonstrate that DUSP2 plays key roles in both the innate and adaptive immune systems, which have implications for the initiation and progression of pathology in murine models of human inflammatory disease. However, at present it is unclear whether or not these relate to the direct activity of this phosphatase in modulating MAPK signalling or may involve other relevant targets. Clearly more work is required to reconcile the in vivo observations with precise molecular mechanism.
2.2.2 DUSP2 in cancer
Thus far DUSP2−/− mice have not been crossed into any of the commonly used murine cancer models and reports of the involvement of DUSP2 in cancer are relatively scant. Down-regulation of DUSP2 has been reported in a number of solid tumours, where its expression level was inversely proportional to that of the hypoxia-inducible transcription factor HIF-1α, and its loss seemed to mediate increased ERK activation and chemoresistance in cancer cell lines and to contribute to colon cancer “stemness”. Given its expression in hematopoietic tissues, there are also a number of studies linking DUSP2 with blood cell cancers. Down-regulation of DUSP2 in acute myeloid leukemia (AML) is associated with constitutive ERK activation, while recent data from cancer genome sequencing of diffuse large B-cell lymphomas (DLBCL), the major form of non-Hodgkin's lymphoma, reveals that DUSP2 is one of the most frequently mutated genes in this disease. The observation that DUSP2 expression is highly inducible upon stimulation of B-cell lymphoma cell lines suggests that mutations in DUSP2 may have the potential to modify MAPK signalling in DLBCL. It will be vital to determine the effects of these mutations on the localisation or activity of DUSP2 in order to explore the possible contribution of this phosphatase to the initiation and/or progression of disease.
2.3 DUSP4/MKP-2
DUSP4/MKP-2 was amongst the very earliest of the MKPs to be characterised and is most closely related to DUSP1/MKP-1, sharing 58.8% identity at the amino-acid sequence level. Although it is not as widely studied, DUSP4/MKP-2 shares many features with its nearest relative, including transcriptional regulation in response to growth factors, an ability to dephosphorylate ERK, JNK and p38 MAPKs and regulation of DUSP4/MKP-2 protein stability by the phosphorylation of its C-terminus. The generation of knockout mice has now advanced our knowledge of DUSP4/MKP-2 function in a number of areas.
2.3.1 DUSP4/MKP-2 in innate and adaptive immunity
The earliest reports using the DUSP4−/− mice centred on its possible function as a regulator of innate immunity and inflammation. BMDMs from DUSP4−/− mice showed increased levels of both JNK and p38, but not ERK, signalling in response to LPS. This correlated with a potentiation of LPS-stimulated induction of the pro-inflammatory cytokines IL-6, IL-12β (IL-12p40) and TNFα, and also of cyclooxygenase-2 (COX-2)-derived prostaglandin E2 (PGE2) production. However, IL-10 was suppressed, as was inducible nitric oxide synthase (iNOS) expression, while arginase-1 levels were increased. The reciprocal changes in iNOS/arginase-1 levels would tend to suppress nitric oxide (NO) production, as arginase-1 competes with iNOS for the same substrate. Following infection with the intracellular parasite Leishmania mexicana, mice lacking DUSP4/MKP-2 were found to be more susceptible to infection, with an increased parasite burden and lesion size, and this was accompanied by a suppression of Th1 and/or increased Th2 responses. The increased susceptibility to Leishmania mexicana infection was due to decreased iNOS and increased expression and function of arginase-1 rather than any modulation of cytokine synthesis. Taken together these results suggest that DUSP4/MKP-2 does not display simple functional redundancy with respect to its near relative, but instead is protective against Leishmania mexicana infection due to up-regulation of iNOS and suppression of arginase-1 expression, thus promoting NO-mediated parasite death. This mechanism was also found to account for the protective effects of DUSP4/MKP-2 against Leishmania donovani, the causative agent of visceral leishmaniasis, and Toxoplasma gondii, which causes toxoplasmosis. Differences between DUSP4/MKP-2 and DUSP1/MKP-1 were further underlined in studies of the response of DUSP4−/− mice to experimental LPS-induced sepsis. The first major surprise came with the discovery that, in contrast to mice lacking DUSP1/MKP-1, mice lacking DUSP4/MKP-2 were more resistant to endotoxic shock and also had lower levels of circulating IL-1β, IL-6, and TNFα. Furthermore, LPS-stimulated BMDMs derived from DUSP4−/− mice produced significantly less TNFα and IL-10 when compared to wild type cells and this was associated with increased levels of phosphorylated (active) ERK, but decreased levels of phospho-JNK and p38. It is unclear why there is a discrepancy between these results in LPS-stimulated BMDMs and those obtained by Al-Mutairi et al., but they went on to show that elevated ERK2 signalling led to induction of DUSP1/MKP-1 in the DUSP4−/− macrophages and that this MKP was responsible for the reduction in JNK and p38 signalling and reduced cytokine production.
This supports a model in which ERK-mediated crosstalk between MKP-2 and MKP-1 acts to regulate cytokine production in response to LPS, a view supported by the observation that siRNA-mediated knockdown of DUSP1/MKP-1 increased the production of TNFα by DUSP4−/− BMDMs. In the adaptive immune system, deletion of DUSP4/MKP-2, like deletion of DUSP1/MKP-1, does not affect thymocyte maturation and positive selection. Furthermore, no enhanced ERK, JNK, or p38 phosphorylation was observed in either activated or phorbol-12-myristate-13-acetate (PMA)-treated DUSP4−/− T cells. However, CD4+, but not CD8+, T cells did show higher rates of proliferation without affecting differentiated Th1 and Th2 T-cell functions in vivo. The proliferative change in CD4+ T cells lacking DUSP4/MKP-2 was associated with increased STAT5 phosphorylation and interleukin 2 receptor alpha (CD25) expression. Subsequent work showed that DUSP4/MKP-2 decreases both the transcriptional activity and stability of STAT5, and both in vitro and in vivo data showed that DUSP4/MKP-2 deletion enhanced iTreg and reduced Th17 polarisation, while DUSP4-deficient mice were somewhat more resistant to the induction of autoimmune encephalitis. Finally, increased DUSP4/MKP-2 expression has been implicated in age-dependent defective adaptive immunity. Increased expression of DUSP4/MKP-2 in CD4+ memory T cells from older (>65 years) individuals inhibits the ERK- and JNK-dependent expression of CD40L and reduces the production of the cytokines interleukin-4 (IL-4) and interleukin-21 (IL-21) by follicular helper cells, thus impairing T cell-dependent B cell responses. These results suggest that specific inhibition of DUSP4/MKP-2 activity might form part of a strategy to combat increased morbidity from infections in the elderly. Taken together these studies clearly implicate DUSP4/MKP-2 in the regulation of innate and adaptive immunity. However, despite assumptions that its close relationship with the prototypic MKP DUSP1/MKP-1 might indicate overlapping or identical functions, this enzyme seems to play a distinct role in regulating immune function. In macrophages there is some debate as to the effects of DUSP4/MKP-2 deletion on the activity of specific MAPK isoforms, with discordant results in LPS-treated BMDMs. In T cells it seems to mediate its effects via modulation of STAT5 and, as no perturbation in MAPK signalling was observed in cells lacking DUSP4/MKP-2, this has been taken as evidence of a MAPK-independent function for this enzyme.
2.3.2 DUSP4/MKP-2 in cancer
Although less well studied than DUSP1/MKP-1, there have nevertheless been numerous reports of either increased or reduced expression of DUSP4/MKP-2 in a wide variety of human cancer cell lines and primary tumours including pancreatic, lung, ovarian, breast, liver, thyroid and colon. However, the majority of these studies have relied on either association and/or correlation with clinical outcome/tumour subtype or ectopic expression of this MKP in cancer cell lines. Thus far the DUSP4−/− knockout mice have not been crossed into or utilised in any of the well-characterised murine cancer models and there is no direct evidence of a role for DUSP4/MKP-2 in the initiation or progression of tumours.
Furthermore, the fact that DUSP4/MKP-2 has a proven role in immune regulation indicates that a conditionally targeted allele of DUSP4/MKP-2 would be required to avoid the confounding effects of deleting this MKP in immune cells on any cancer phenotypes observed. Despite this, there have been some recent reports that indicate a role for this MKP in human cancers. DUSP4/MKP-2 was found to be epigenetically silenced in some 75% of >200 cases of diffuse large B-cell lymphoma (DLBCL), and a lack of DUSP4/MKP-2 was a negative prognostic factor in three independent cohorts of DLBCL patients. Mechanistically, this cancer appears to be dependent on JNK signalling for continued survival, and loss of DUSP4/MKP-2 contributes to cancer progression by augmenting JNK activity. This is consistent with the results of ectopic expression of DUSP4/MKP-2, which ablates JNK activity and induces apoptosis in DLBCL cells, while dominant-negative interference with JNK also restricts survival. There are also indications that DUSP4/MKP-2 is implicated in resistance to chemotherapy, both in gastric cancer, where resistance to doxorubicin is associated with DUSP4/MKP-2-driven epithelial-mesenchymal transition (EMT), and in Her2-positive breast cancer, where DUSP4/MKP-2 is associated with resistance to the anti-Her2 humanised monoclonal antibody Trastuzumab and siRNA-mediated silencing of DUSP4/MKP-2 re-sensitised breast cancer cell lines with an amplified Her2 oncogene to this agent.

2.3.3 DUSP4/MKP-2 in other tissues

DUSP4/MKP-2 is expressed at increasing levels during the process of neuronal differentiation in neural progenitors derived from retinoic acid (RA)-treated murine embryonic stem cells (mESCs), and lentiviral DUSP4/MKP-2 siRNA knockdown significantly retarded this process. Importantly, this phenotype could be rescued with siRNA-resistant wild type but not a catalytically inactive mutant of DUSP4/MKP-2, and loss of DUSP4/MKP-2 resulted in increased levels of phospho-ERK, but not JNK or p38 MAPKs, indicating that this is the relevant target. Overall, these data indicate that DUSP4/MKP-2 plays a role in both the neural commitment of mESCs and neuronal differentiation, and may point to a wider role for this MKP in brain function and pathology. More recently, direct evidence of a role for DUSP4/MKP-2 in the brain has come from a study of hippocampal neuronal excitability, synaptic plasticity and behaviour in DUSP4−/− mice. Long-term potentiation (LTP) was found to be impaired in DUSP4−/− mice, and the frequency of excitatory postsynaptic currents (EPSCs) was also increased in both hippocampal slices and hippocampal cultures. Finally, whereas locomotor activity and anxiety-like behaviour were normal in DUSP4−/− mice, hippocampal-dependent spatial reference and working memory were both somewhat impaired. Surprisingly, given the established role of ERK signalling in LTP, no abnormalities in ERK signalling were observed in either DUSP4−/− brain tissue or in primary hippocampal cultures. However, neither JNK nor p38 activation was studied, and the former also plays a role in memory formation and synaptic plasticity.
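As an aside on the sequence comparison that opened this section: the 58.8% amino-acid identity between DUSP4/MKP-2 and DUSP1/MKP-1 is the kind of figure obtained from a pairwise alignment, where percent identity is simply the fraction of aligned positions occupied by identical residues. The short Python sketch below illustrates the calculation on invented, pre-aligned toy fragments only (not the real DUSP1 or DUSP4 sequences, which in practice would be aligned with a dedicated tool such as EMBOSS Needle; reported values also depend on the alignment and denominator conventions used):

```python
# Minimal sketch: percent identity between two pre-aligned protein sequences.
# The fragments below are invented for illustration only; they are NOT the
# real DUSP1/MKP-1 or DUSP4/MKP-2 sequences.

def percent_identity(aligned_a: str, aligned_b: str) -> float:
    """Identical residue pairs divided by alignment length, as a percentage.
    Gap characters ('-') never count as matches."""
    if len(aligned_a) != len(aligned_b):
        raise ValueError("aligned sequences must be the same length")
    matches = sum(
        1 for a, b in zip(aligned_a, aligned_b)
        if a == b and a != "-"
    )
    return 100.0 * matches / len(aligned_a)

if __name__ == "__main__":
    # Hypothetical aligned fragments (gaps inserted by an upstream aligner).
    seq1 = "MVMEVGTLDA-GGLRSLLE"
    seq2 = "MVM-VGSLDAEGGLKSLLE"
    print(f"identity: {percent_identity(seq1, seq2):.1f}%")  # ~78.9% for these toy fragments
```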
2.4 DUSP5

DUSP5 was first identified as a growth factor- and heat shock-inducible nuclear MKP and is closely related to both DUSP1/MKP-1 and DUSP4/MKP-2. Despite its early discovery and characterisation as an MKP, little attention was paid to DUSP5, presumably on the assumption that it would share many of the properties of its nearest relatives with respect to a broad activity towards ERK, JNK and p38 MAPKs. However, it was later shown that DUSP5 is unique amongst the four inducible nuclear MKPs in being absolutely specific for ERK1/2. Furthermore, growth factor-inducible expression of DUSP5 is mediated by ERK activity, making it a classical negative feedback regulator of this signalling pathway, and DUSP5 binds tightly to its substrate and is able to anchor inactive ERK in the cell nucleus. Together, these properties define DUSP5 as the nuclear counterpart of the inducible cytoplasmic ERK-specific phosphatase DUSP6/MKP-3 (see ).

2.4.1 DUSP5 in innate and adaptive immunity

An early indication that DUSP5 might play a role in adaptive immunity came with the observation that it was highly induced following IL-2 stimulation of T cells. This idea was seemingly reinforced by the finding that transgenic expression of DUSP5 in lymphoid cells arrested thymocyte development at the CD4+/CD8+ (double-positive) stage and caused autoimmune symptoms in these animals. However, these results illustrate the limitations of overexpression experiments and probably reflect the function of ERK itself, rather than endogenous DUSP5, in regulating immune cell development. This has been confirmed by more recent experiments utilising DUSP5−/− mice, in which global deletion had no effect on innate or adaptive immune cell numbers in the bone marrow, spleen or lymph nodes under homeostatic conditions. However, subjecting DUSP5−/− mice to acute immune challenges has revealed more subtle phenotypes that are modulated in a DUSP5-dependent manner. Thus DUSP5 has been shown to be highly expressed in eosinophils, where it negatively regulates IL-33-mediated survival via the suppression of interleukin-33 (IL-33)-induced ERK activity and down-regulation of the anti-apoptotic protein B-cell lymphoma-extra large (BCL-xL). Consequently, DUSP5−/− mice challenged by helminth infection displayed prolonged eosinophil survival and enhanced eosinophil effector functions, and were able to clear their parasite burden more efficiently following infection. More recently Kutty et al., while confirming that T cell development is normal in DUSP5−/− mice, have shown that in response to acute infection with lymphocytic choriomeningitis virus (LCMV) these animals have decreased numbers of short-lived effector cells (SLECs) and increased proportions of memory precursor effector cells (MPECs). Both cell types are derived from effector CD8+ T cells in response to acute infection, with SLECs being highly cytotoxic cells that readily undergo apoptosis, while MPECs retain the ability to proliferate and eventually develop into mature memory T cells. This defect was intrinsic to T cells, as bone marrow chimeric mice in which CD8+ T cells were reconstituted from DUSP5−/− donors showed an identical phenotype, and this study clearly indicates that DUSP5 plays an essential role in regulating the survival of SLECs. However, the precise mechanism(s) by which DUSP5 affects the balance of differentiating and maintaining SLEC and MPEC populations, and its dependence on ERK activity, are as yet uncertain.
2.4.2 DUSP5 and cancer

The canonical Ras-ERK MAPK signalling pathway is frequently deregulated in human cancers, with activating mutations found in upstream components of the pathway including receptor tyrosine kinases (RTKs), Ras GTPases, the MAPK kinase kinase Braf and MAPK kinase (MEK). The observation that Braf is mutated in 40–60% of malignant melanomas and in tumours of the thyroid, colon and lung underscores the importance of the Ras-ERK pathway in malignant disease, making it an intense focus of anticancer drug discovery. In common with the cytoplasmic ERK-specific phosphatase DUSP6/MKP-3 (see ), elevated DUSP5 expression is observed in a range of Ras or Braf mutant cancer cells, where it is presumed to suppress oncogenic ERK activation. DUSP5 has also been reported to be subject to epigenetic silencing in gastric cancers, and this correlated with poorer patient survival. More recently, DUSP5 down-regulation and promoter hypermethylation have been identified in colorectal tumour samples and cell lines. However, DUSP5 knockdown in colorectal cancer cell lines displayed limited effects on phospho-ERK levels and did not increase proliferation. Furthermore, a transgenic mouse overexpressing DUSP5 in the intestinal epithelium displayed no alterations in ERK signalling, intestinal homeostasis or adenoma formation, and the authors concluded that DUSP5 does not regulate intestinal development or tumourigenesis. Although surprising, given the demonstrable effects of DUSP5 overexpression on ERK activation in vitro, these results should be interpreted with a degree of caution, as the constitutive transgene used here cannot recapitulate the transcriptional dynamics inherent in feedback control exerted by endogenous DUSP5. In contrast, Rushworth et al. demonstrated that DUSP5 loss sensitised mice to HRas Q61L-driven skin papilloma formation in the well-established DMBA/TPA (7,12-dimethylbenz[a]anthracene/12-O-tetradecanoylphorbol-13-acetate)-inducible skin carcinogenesis model. Furthermore, in vitro experiments in DUSP5−/− MEFs revealed an essential non-redundant function for this MKP in suppressing nuclear ERK activity following acute pathway stimulation. Loss of DUSP5 also provoked the upregulation of a cohort of ERK-dependent genes, including SerpinB2, in TPA-stimulated MEFs. SerpinB2 had previously been identified as a susceptibility gene in this model of skin carcinogenesis, and concomitant deletion of SerpinB2 reversed the sensitivity of DUSP5−/− mice to DMBA/TPA-induced papilloma formation, identifying DUSP5 as a bona fide tumour suppressor by virtue of its ability to suppress SerpinB2 expression in this animal model of Ras-induced cancer. More recently, experiments using wild type and DUSP5−/− MEFs have demonstrated that DUSP5 function is dependent on the nature of the oncogenic driver. Thus, while loss of this MKP in the context of mutant Ras is compatible with continued cell proliferation, its deletion in cells expressing mutant BRaf V600E causes ERK-dependent cell cycle arrest and senescence and prevents cell transformation by this oncogene in vitro. This latter study supports the idea that MKPs might either suppress or promote carcinogenesis depending on the oncogenic and tissue context, and it will be interesting to see the results of DUSP5 ablation in other, more clinically relevant, murine models of Ras- and Braf-driven cancer.
2.4.3 DUSP5 in other tissues

DUSP5 is implicated in cardiovascular development, where it is expressed in angioblasts and mature vasculature in zebrafish, and DUSP5 knockdown increased the etsrp+ (ets1-related protein) angioblast population during early embryonic development. DUSP5 overexpression also antagonised the function of a serine/threonine kinase, Snrk-1, which promotes angioblast development. DUSP5 has also been shown to act as a regulator of cardiac fibroblast proliferation and cardiac hypertrophy. Ferguson et al. demonstrated that the anti-hypertrophic activity of class I histone deacetylase (HDAC) inhibitors is mediated by their ability to increase DUSP5 gene expression, thus inhibiting both ERK activity and cardiac myocyte proliferation. Ectopic DUSP5 expression phenocopied the effects of HDAC inhibition, while DUSP5 knockdown rescued endogenous phospho-ERK levels following HDAC inhibition. A subsequent study has revealed that HDAC3-specific inhibitors also induce DUSP5 expression in mouse models of diabetic cardiomyopathy and that this is associated with reduced hypertrophy and fibrosis. DUSP5 expression is also suppressed by another epigenetic regulator, methyl-CpG-binding protein 2 (MeCP2). High MeCP2 expression was associated with cardiac fibrosis, and MeCP2 knockdown in cardiac fibroblasts increased DUSP5 expression and reduced both ERK activity and proliferation. Finally, in adult rats DUSP5 deletion has been shown to modulate the myogenic response of cerebral arteries and autoregulation of cerebral blood flow, indicating that DUSP5 also plays a physiological role in vascular reactivity.
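Since both DUSP5 (above) and DUSP6/MKP-3 (in the next section) are described as classical negative feedback regulators of ERK, it may help to make the logic of that feedback explicit. The following toy model is purely qualitative: a sustained stimulus activates ERK, active ERK induces the phosphatase, and the phosphatase dephosphorylates ERK. All parameter values are arbitrary and illustrative (they are not fitted to any measured DUSP5 kinetics), but the simulation reproduces the expected behaviour of such a loop, namely a transient peak of phospho-ERK followed by partial adaptation to a lower steady state:

```python
# Toy ERK/phosphatase negative-feedback loop (illustrative parameters only).
# A sustained stimulus activates ERK; active ERK induces the phosphatase
# (standing in for DUSP5); the phosphatase dephosphorylates ERK.
import numpy as np
from scipy.integrate import odeint

def feedback(y, t, stimulus=1.0):
    erk_p, mkp = y                      # phospho-ERK and phosphatase levels (arbitrary units)
    k_act, k_dephos = 1.0, 2.0          # ERK activation / phosphatase-mediated dephosphorylation
    k_induce, k_decay = 0.8, 0.3        # ERK-driven phosphatase induction / phosphatase turnover
    derk_p = k_act * stimulus * (1.0 - erk_p) - k_dephos * mkp * erk_p
    dmkp = k_induce * erk_p - k_decay * mkp
    return [derk_p, dmkp]

t = np.linspace(0.0, 30.0, 301)
traj = odeint(feedback, [0.0, 0.0], t)

print(f"peak phospho-ERK:         {traj[:, 0].max():.2f}")
print(f"steady-state phospho-ERK: {traj[-1, 0]:.2f}")  # lower than the peak: feedback adaptation
```

Removing the feedback arm in this sketch (setting k_induce to zero) leaves phospho-ERK at its higher, unadapted level, which is the intuition behind the elevated or prolonged ERK activation reported in several of the knockout studies discussed in this review.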
3 The cytoplasmic ERK-specific MKPs

3.1 DUSP6/MKP-3

First characterised as an inducible cytoplasmic MKP, which is prototypic of a subfamily of three highly related enzymes, DUSP6/MKP-3 was subsequently found to display absolute substrate specificity for ERK1 and ERK2, having no significant activity towards JNK, p38 or ERK5. This selectivity is mediated by high-affinity binding of ERK to the KIM within the amino-terminal domain of DUSP6/MKP-3 and underpinned by catalytic activation of DUSP6/MKP-3 involving a conformational change within the PTPase domain that repositions key active site residues and greatly increases enzyme activity. The cytoplasmic localisation of DUSP6/MKP-3 is mediated by a leucine-rich nuclear export signal (NES) within the amino-terminal domain, and the tight binding of ERK to the KIM also indicates a role for this MKP as a cytoplasmic anchor for inactive ERK. Studies of the pattern of DUSP6/MKP-3 expression during early mouse development established a link with sites of fibroblast growth factor (FGF) signalling, and subsequent work in the chicken embryo, DUSP6−/− mice and cultured cells established that DUSP6/MKP-3 is transcriptionally induced in response to FGF-mediated ERK signalling and acts as a classical negative feedback regulator of ERK activity during early development.

3.1.1 DUSP6/MKP-3 in immunity and inflammation

Relatively few studies have addressed a role for DUSP6/MKP-3 in immune regulation. DUSP6−/− mice are reported to be indistinguishable from their wild type littermates in terms of the total number of cells and proportions of CD4+, CD8+, and CD4/CD8 double-positive cells in spleen, mesenteric lymph nodes (MLN) and thymus. However, T cell receptor (TCR) stimulation of DUSP6−/− CD4+ T cells resulted in higher levels of phospho-ERK1/2, but not of JNK or p38, and anti-CD3/CD28-stimulated CD4+ T cells harvested from spleen and MLN produced higher amounts of IFN-γ and lower amounts of IL-17A when compared to wild type controls. Activated DUSP6−/− CD4+ T cells also displayed increased proliferation in vitro, but this was accompanied by increased levels of activation-induced cell death (AICD), perhaps explaining why lymphoid cellularity remains unchanged. In DUSP6−/− CD8+ T cells the expression of CD107a (lysosomal-associated membrane protein 1; LAMP-1), a marker of lymphocyte degranulation, was reduced, suggesting that DUSP6/MKP-3 may also regulate the cytotoxic activity of CD8+ T cells. The changes in cytokine production by CD4+ T cells after DUSP6/MKP-3 deletion are suggestive of a role in T cell polarisation. When subjected to Th1 polarising conditions a larger number of DUSP6−/− CD4+ T cells produced IFN-γ, while under Th17 polarising conditions DUSP6−/− CD4+ T cells gave rise to fewer IL-17A-producing cells. Taken together, these results indicate that DUSP6/MKP-3 regulates the polarisation of CD4+ T cell subsets by inhibiting Th1 differentiation and favouring Th17 differentiation. Furthermore, to assess the function of DUSP6−/− regulatory T cells (Tregs), these were isolated, co-cultured with naïve CD4+ T cells from WT mice and stimulated with anti-CD3/CD28 for 72 h before assessing CD4+ T cell proliferation. DUSP6−/− Treg cells consistently showed a lower capacity to inhibit the proliferation of naïve CD4+ T cells, indicating that DUSP6/MKP-3 is required for suppressive Treg function. Finally, to assess DUSP6/MKP-3 function in vivo, DUSP6−/− mice were crossed with IL-10−/− mice and assessed for the development of intestinal colitis. The double knockout (DKO) mice consistently developed severe inflammation, with epithelial crypt hyperplasia, loss of goblet cells, and immune cell infiltration into colonic connective tissue, while DKO colonic explants produced increased levels of IFN-γ and TNFα, but lower levels of IL-17A, when compared to IL-10−/− tissues. Satisfyingly, administration of PD0325901, a specific MEK inhibitor, both ameliorated and reversed the inflammatory phenotype seen in the DKO mice, demonstrating that this is a direct result of increased levels of ERK1/2 signalling in the absence of DUSP6/MKP-3. In a recent study of the role of DUSP6 in endothelial cell inflammation, Hsu et al. found that tail vein injection of TNFα caused elevated expression of intercellular adhesion molecule 1 (ICAM1) in wild type but not DUSP6−/− animals, suggesting that DUSP6 modulates neutrophil recruitment and transendothelial migration at sites of acute inflammation. In agreement with this hypothesis, DUSP6−/− animals injected intraperitoneally with either TNFα or LPS showed lower levels of pulmonary neutrophil infiltration and lung injury, and adoptive transfer of wild type neutrophils into DUSP6−/− mice revealed that the defect was intrinsic to endothelial cells. Surprisingly, in vitro experiments in primary human umbilical vein endothelial cells (HUVECs) revealed that the role of DUSP6/MKP-3 in regulating TNFα-induced expression of ICAM1 involved activation of canonical nuclear factor (NF)-κB but was not dependent on its ability to dephosphorylate ERK MAPK. Taken together, these studies indicate that DUSP6/MKP-3 may play complex and tissue-specific roles in immune cell function and inflammatory processes, and further work will be required to delineate the precise nature of the signalling events involved and the tissue specificity of DUSP6/MKP-3 functions.

3.1.2 DUSP6/MKP-3 in metabolic homeostasis

The first indication that DUSP6/MKP-3 might be involved in metabolic control was the finding that its expression was able to prevent the suppression of phosphoenolpyruvate carboxykinase (PEPCK) gene expression by insulin. DUSP6/MKP-3 was also expressed in insulin-responsive tissues, and expression levels were markedly elevated in the livers of insulin-resistant genetically obese (db/db) mice. Subsequent work showed that expression of DUSP6/MKP-3 is also increased in the livers of HFD-induced obese mice and that adenovirus-mediated DUSP6/MKP-3 expression in lean mice promoted gluconeogenesis and increased levels of fasting blood glucose. In contrast, shRNA knockdown of DUSP6/MKP-3 in both lean and obese mice resulted in decreased fasting blood glucose levels. Mechanistically, the transcriptional upregulation of gluconeogenic genes such as PEPCK that underpinned these effects was mediated by the dephosphorylation and nuclear translocation of Forkhead box protein O1 (FOXO1). Surprisingly, the effects of DUSP6/MKP-3 on FOXO1 were postulated to be direct, via protein-protein interaction and dephosphorylation of this transcription factor. However, this finding is extremely difficult to reconcile with the known biochemical properties of DUSP6/MKP-3 and in particular the requirement for ERK2 binding to achieve catalytic activation of this phosphatase.
Experiments utilising DUSP6−/− mice to study metabolism have now been performed, with the finding that mice lacking DUSP6/MKP-3 are somewhat protected from diet-induced obesity. However, quite different conclusions were reached regarding the underlying mechanism. Feng et al. reported that DUSP6−/− mice are protected against both HFD-induced weight gain and hepatosteatosis and that these effects were accompanied by reduced liver triglyceride (TG) levels and adiposity. DUSP6−/− mice also exhibited increased energy expenditure, enhanced peripheral glucose disposal, and improved systemic insulin sensitivity. Phosphoproteomic analyses in cultured Hepa1–6 cells with or without siRNAs targeting DUSP6/MKP-3, and comparison of liver lysates from DUSP6−/− and DUSP6+/+ mice, revealed significant increases in the phosphorylation of HDAC1 and HDAC2. Pharmacological inhibition or combined knockdown of these enzymes in primary hepatocytes from DUSP6−/− mice was able to reverse the protective effects of DUSP6/MKP-3 deletion by raising the expression levels of several lipogenic genes, indicating that these enzymes may be the relevant in vivo targets. Ruan et al. also reported protection against HFD-induced weight gain in DUSP6−/− mice, together with improved glucose tolerance, increased insulin sensitivity and protection against hepatosteatosis. However, faecal transplantation from HFD-fed DUSP6−/− mice into germ-free animals phenocopied this resistance, and following studies of DUSP6/MKP-3-dependent changes in gut microbiota, intestinal barrier function and the gut transcriptome, they concluded that DUSP6/MKP-3 loss protects the intestinal epithelial barrier from HFD-induced disruption and subsequent remodelling of the gut flora, thereby maintaining a lean-associated microflora. They conclude that DUSP6/MKP-3 regulates homeostasis between the gut epithelium, mucosal immunity and microbiota. In contrast, a recent study observed comparable body weight, fat and lean mass in DUSP6−/− and DUSP6+/+ mice after 26 weeks on a HFD. However, glucose tolerance was somewhat abnormal in both lean and obese DUSP6−/− mice when compared to controls. At present it is unclear why there is a discrepancy between the latter study and the previous two, but the finding that variations in the gut microbiota can have profound consequences for sensitivity to HFD, coupled with differences in the genetic backgrounds used in these studies (pure C57Bl/6J vs. mixed 129 × C57Bl/6J), may account for this. As was the case for studies of DUSP1/MKP-1, the use of an unconditional (whole body) knockout of DUSP6/MKP-3 also makes the interpretation of these studies more difficult, and future experiments using conditional ablation of DUSP6/MKP-3 will be required to address complexity and tissue-specific interplay in the regulation of metabolic homeostasis by this phosphatase.

3.1.3 DUSP6/MKP-3 and cancer

Given the direct involvement of Ras/ERK signalling in human cancer, a possible role for DUSP6/MKP-3 has been explored in some depth. As was the case for the inducible nuclear ERK-specific phosphatase DUSP5, increased DUSP6/MKP-3 expression has been observed in primary tumours and cancer cell lines which harbour mutations in either Ras or Braf, where its role as a negative feedback regulator of ERK activity has led to the suggestion that it may act as a tumour suppressor.
Thus, DUSP6 expression is initially elevated and then epigenetically silenced during the progression of mutant Kras-driven pancreatic ductal adenocarcinoma, with the lowest levels detected in the most invasive and poorly differentiated tumours. In a similar vein, loss of DUSP6/MKP-3 expression in mutant Kras-driven lung tumours is associated with increased disease severity and histological grade, and loss of heterozygosity at the DUSP6 locus occurs in almost 20% of patients. However, as is the case for many MKPs, much of the mechanistic data obtained so far involves the reversal of cancer-associated phenotypes by ectopic expression of DUSP6/MKP-3 in cancer cell lines and must therefore be treated with a degree of caution. No direct evidence of a tumour suppressor function for this MKP has yet been obtained by crossing DUSP6−/− mice into established murine cancer models. In contrast, two recent reports contain evidence that DUSP6/MKP-3 may actually be an oncogenic driver in certain human cancers. Firstly, Shojaee et al. found that the acute oncogenic activation of BCR-Abl and Nras G12D in human pre-B cells was invariably lethal. However, any surviving cells were transformed and displayed increased expression of negative regulators of ERK signalling, including DUSP6/MKP-3. Furthermore, this upregulation of DUSP6/MKP-3 was also seen in pre-B ALL cells, where it was driven by both Abl and ERK activity, and high DUSP6/MKP-3 mRNA levels in patients with Philadelphia chromosome-positive (Ph+) (BCR-Abl-driven) ALL were associated with shorter survival. To gain mechanistic insight, Shojaee et al. transformed bone marrow-derived B cell lineage progenitor cells from DUSP6−/− mice and wild-type controls with BCR-Abl1. Interestingly, the survival of the DUSP6−/− cells was significantly reduced, and conditional expression of Nras G12D in pre-B cells was able to transform wild-type but not DUSP6−/− cells. siRNA-mediated knockdown of DUSP6/MKP-3 in human pre-B ALL cells also reduced survival. All of these observations strongly indicate that pre-B ALL cells are dependent on DUSP6/MKP-3-mediated negative feedback control of ERK signalling for continued survival and growth. In support of this conclusion, BCI, a compound identified as an allosteric inhibitor of DUSP6/MKP-3, caused a rapid increase in ERK activity in patient-derived Ph+ ALL cells. Mouse xenograft experiments using Ph+ ALL cells derived from patients after relapse during ongoing therapy with the Abl inhibitor Imatinib (Gleevec) showed these to be resistant to tyrosine kinase inhibition, but sensitive to treatment with BCI, indicating that this drug might be used to treat TKI-resistant Ph+ ALL. While these results are very provocative, they must be treated with a degree of caution in that BCI, as mentioned previously in the context of its use as an inhibitor of DUSP1/MKP-1, is known to be relatively non-specific and displays considerable off-target toxicity. In a recent genetic screen, Wittig-Blaich et al. identified DUSP6/MKP-3 amongst a set of genes with growth-suppressive properties consistent with tumour suppressor function. However, specifically in the context of mutant Braf-driven melanoma, they found that siRNA-mediated knockdown of DUSP6/MKP-3 caused apoptosis. They speculate that DUSP6/MKP-3 might be required to prevent Braf V600E hyperactivation from triggering cell death via ERK1/2 downstream substrates and suggest that it might represent a synthetic lethal drug target in this subset of melanoma patients.
The idea that MKPs can intervene to prevent the engagement of tumour-suppressive pathways has been suggested previously, mainly in the context of ERK-dependent oncogene-induced senescence (OIS), and, as discussed above, there is some support for differential outcomes in terms of cell proliferation and senescence after deletion of the inducible nuclear ERK-specific MKP DUSP5 in cells expressing either activated Ras or Braf. In support of the idea that DUSP6/MKP-3 may be pro-oncogenic in certain cancer types, DUSP6/MKP-3 is upregulated in human glioblastoma cell lines, and mouse xenograft experiments showed that tumours arising from glioblastoma cells expressing DUSP6/MKP-3 grew significantly faster than non-expressing controls. The overexpression of DUSP6/MKP-3 in papillary thyroid carcinoma (PTC) cell lines is also associated with increased cell migration and invasion. Finally, although DUSP1/MKP-1 has been widely studied as a modulator of drug responses in cancer chemotherapy (see ), less attention has been paid to the ERK-specific MKPs. However, Phuchareon et al. recently identified the down-regulation of DUSP6/MKP-3 as a contributing factor to the reactivation of Ras-ERK signalling and drug resistance in epidermal growth factor receptor (EGFR) mutant lung cancer cell lines exposed to the TKIs gefitinib (Iressa) and erlotinib (Tarceva). Resistance was mediated by increased ERK-dependent phosphorylation and degradation of the extra-long isoform of the pro-apoptotic B-cell lymphoma-2 (Bcl-2) family protein Bim (BimEL), thus promoting cancer cell survival. Interestingly, DUSP6/MKP-3 down-regulation has also been implicated in mediating reactivation of Ras-ERK signalling and drug resistance in lung cancer cells harbouring the echinoderm microtubule-associated protein-like 4-anaplastic lymphoma kinase (EML4-ALK) fusion protein exposed to the TKI crizotinib (Xalkori). As one of these two driver mutations is present in almost 20% of non-small cell lung cancers (NSCLC), targeting ERK signalling in combination with the use of TKIs may be a viable strategy to forestall or prevent TKI resistance, and loss of DUSP6/MKP-3 is worthy of wider study as a possible ERK-mediated drug resistance mechanism in other tumour types driven by abnormal tyrosine kinase activity. In conclusion, it is perhaps no surprise that DUSP6/MKP-3 is implicated in the regulation of oncogenic signalling through the Ras/ERK pathway, and recent work indicates that the effects of altered expression levels may depend on both the oncogenic background and the tissue(s) involved. Further studies will be required, preferably using conditional deletion of this MKP in murine models, to dissect out its precise role in carcinogenesis, particularly given its emerging role in immune regulation and inflammatory processes, both of which may have a bearing on tumour initiation and development.

3.2 DUSP7/MKP-X

DUSP7/MKP-X is most closely related to DUSP6/MKP-3 and shares many of its properties, including cytoplasmic localisation, a high degree of substrate specificity for the ERK1/2 MAPKs and substrate-induced catalytic activation, but, in contrast to DUSP6/MKP-3, almost nothing is known about its physiological function(s) or association with human disease.
However, DUSP7/MKP-X was identified in an siRNA-based phenotypic screen for genes involved in meiotic progression in mouse oocytes, and more recent work has shown that DUSP7-depleted oocytes either fail or are significantly delayed in resuming meiosis and that cyclin-dependent kinase-1/cyclin B (Cdk1/CycB) activity drops below the critical level required to reinitiate meiosis. Once in meiosis, DUSP7/MKP-X-depleted oocytes also had severe chromosome alignment defects and progressed into anaphase prematurely. These effects were judged to be secondary to a failure to dephosphorylate and inhibit protein kinase C isoforms, a prerequisite for the timely activation of Cdk1/CycB, and are likely to be significant as both male and female mice lacking DUSP7/MKP-X have been reported as viable but infertile ( http://www.mousephenotype.org/data/genes/MGI:2387100 ). As failure to resume oocyte meiosis is a contributing factor to female infertility in humans, it will be important to establish whether this signalling pathway is conserved.

3.3 DUSP9/MKP-4

MKP-4, the third member of this group of cytoplasmic phosphatases, is encoded by the X-linked DUSP9 gene and has properties in common with DUSP6/MKP-3 and DUSP7/MKP-X, although its substrate specificity is somewhat more relaxed with respect to binding and inactivation of JNK and p38 MAPKs. DUSP9/MKP-4 is also somewhat unusual in that it is not transcriptionally regulated in response to either growth factor stimulation or stress, but instead seems to be constitutively expressed and regulated via phosphorylation of a conserved serine residue adjacent to the KIM, which abrogates substrate binding. Unconditional deletion of DUSP9 results in embryonic lethality due to placental insufficiency, but tetraploid rescue experiments demonstrated that it is otherwise dispensable for normal embryonic development. Information about possible physiological roles for DUSP9 and links to human disease is relatively scant. Selective expression of DUSP9 in plasmacytoid dendritic cells (pDCs), but not conventional dendritic cells (cDCs), suggested a possible role in determining this phenotype. However, conditional deletion of DUSP9 in pDCs did not increase ERK activation after TLR9 stimulation and only weakly affected IFN-β and IL-12Beta (IL-12p40) production by these cells, indicating that this MKP is not essential for the high-level production of IFN-β, which is characteristic of pDCs. However, recent work now indicates a link between DUSP9/MKP-4 and metabolic homeostasis. In obese and insulin-resistant mouse models, DUSP9/MKP-4 protein levels are reported to be elevated in insulin-responsive tissues, and expression of DUSP9/MKP-4 caused increased expression of PEPCK. Overexpression of DUSP9/MKP-4 in adipocytes also blocked insulin-stimulated glucose uptake, again suggesting that this enzyme antagonises the effects of insulin in responsive cells and tissues. However, in a stress-induced in vitro model of insulin resistance, and following adenoviral-mediated overexpression of DUSP9/MKP-4 in the livers of genetically obese (ob/ob) mice, Emanuelli et al. reported that DUSP9/MKP-4 expression improved glucose tolerance, decreased the expression of gluconeogenic genes and reduced hepatic steatosis.
Despite these apparently contradictory results, a recent study in which conditional liver-specific deletion of DUSP9/MKP-4 was used to study the response to a high-fat diet has demonstrated that loss of DUSP9/MKP-4 in the liver sensitises animals to hepatic steatosis and inflammatory responses and that DUSP9 deficiency aggravated high-fat, high-cholesterol (HFHC)-induced liver fibrosis. In this context, it will be interesting to compare the effects of whole-body DUSP9 knockout, and also of loss of this MKP in other insulin-responsive tissues, as studies of DUSP1/MKP-1 function in metabolic homeostasis have revealed that the liver phenotype may not be dominant over the effects of loss of function in other tissues. Finally, a genetic variant mapping near to the DUSP9 gene locus has repeatedly been detected in genome-wide association studies (GWAS) as a risk factor for the development of type 2 diabetes across different ethnicities in human populations, further reinforcing a link between DUSP9/MKP-4 and metabolic control. The availability of a conditionally targeted allele for DUSP9 should greatly accelerate future work on the possible role of this MKP, both in metabolic disease and in other pathologies.
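For readers unfamiliar with how a GWAS signal such as the one near DUSP9 is assessed, the core of the analysis is a per-variant association test, most commonly a logistic regression of case/control status on genotype dosage under an additive model. The sketch below uses simulated genotypes and phenotypes (not real DUSP9 or diabetes data) and omits the covariate and population-structure adjustments that real pipelines, for example PLINK-based workflows, would include:

```python
# Illustrative single-variant association test under an additive logistic model.
# Genotypes and case/control labels are simulated; this is NOT real DUSP9 data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_individuals = 5000
risk_allele_freq = 0.3

# Genotype dosage: number of risk alleles (0, 1 or 2) per individual.
dosage = rng.binomial(2, risk_allele_freq, size=n_individuals)

# Simulate disease status with a modest per-allele effect (log-odds 0.15).
log_odds = -1.0 + 0.15 * dosage
status = rng.binomial(1, 1.0 / (1.0 + np.exp(-log_odds)))

# Additive model: logit(P(case)) = b0 + b1 * dosage.
X = sm.add_constant(dosage.astype(float))
result = sm.Logit(status, X).fit(disp=False)

print(f"per-allele odds ratio: {np.exp(result.params[1]):.2f}")
print(f"association p-value:   {result.pvalues[1]:.3g}")
```

In a genome-wide analysis this test is repeated across millions of variants, which is why genome-wide significance thresholds (conventionally p < 5 × 10−8) are so stringent.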
DUSP6 /MKP-3 First characterised as an inducible cytoplasmic MKP, which is prototypic of a subfamily of 3 highly related enzymes, DUSP6 /MKP-3 was subsequently found to display absolute substrate specificity for ERK1 and ERK2 having no significant activity towards JNK, p38 or ERK5 . This selectivity is mediated by high affinity binding of ERK to the KIM within the amino-terminal domain of DUSP6 /MKP-3 and underpinned by catalytic activation of DUSP6 /MKP-3 involving a conformational change within the PTPase domain that repositions key active site residues and greatly increases enzyme activity . The cytoplasmic localisation of DUSP6 /MKP-3 is mediated by a leucine-rich nuclear export signal (NES) within the amino terminal domain and the tight binding of ERK to the KIM also indicates a role for this MKP as a cytoplasmic anchor for inactive ERK . Studies of the pattern of DUSP6 /MKP-3 expression during early mouse development established a link with sites of fibroblast growth factor (FGF) signalling and subsequent work in the chicken embryo, DUSP6 −/− mice and cultured cells established that DUSP6 /MKP-3 is transcriptionally induced in response to FGF-mediated ERK signalling and acts as a classical negative feedback regulator of ERK activity during early development . 3.1.1 DUSP6 /MKP-3 in immunity and inflammation Relatively few studies have addressed a role for DUSP6 /MKP-3 in immune regulation. DUSP6 −/− mice are reported to be indistinguishable from their wild type littermates in terms of the total number of cells and proportions of CD4 + , CD8 + , and CD4/CD8 double-positive cells in spleen, mesenchymal lymph nodes (MLN) and thymus . However, T cell receptor (TCR) stimulation of DUSP6 −/− CD4 + T cells resulted in higher levels of phospho-ERK1/2, but not of JNK or p38 and anti-CD3/28 stimulated CD4 + T cells harvested from spleen and MLN produced higher amounts of IFN-γ and lower amounts of IL-17A when compared to wild type controls. Activated DUSP6 −/− CD4 + T cells also displayed increased proliferation in vitro , but this was accompanied by increased levels of activation-induced cell death (AICD), perhaps explaining why lymphoid cellularity remains unchanged. In DUSP6 −/− CD8 + T cells the expression of CD107a or lysosomal associated membrane protein 1; (LAMP-1) a marker of lymphocyte degranulation, was reduced suggesting that DUSP6 /MKP-3 may also regulate the cytotoxic activity of CD8 + T cells. The changes in cytokine production by CD4+ T cells after DUSP6 /MKP-3 deletion are suggestive of a role in T cell polarisation. When subjected to Th1 polarizing conditions a larger number of DUSP6 −/− CD4 + T cells produced IFN-γ, while under Th17 polarizing, conditions DUSP6 −/− CD4 + T cells gave rise to fewer IL-17A producing cells. Taken together these results indicate that DUSP6 /MKP-3 regulates the polarisation of CD4 + T cell subsets by inhibiting Th1 differentiation and favouring Th17 differentiation. Furthermore, to assess the function of DUSP6 −/− regulatory T cells (Treg) these were isolated and co-cultured with naïve CD4 + T cells from WT mice and stimulated with anti-CD3/CD28 for 72 h before assessing CD4+ T cell proliferation. DUSP6 −/− Treg cells consistently showed a lower capacity to inhibit the proliferation of naïve CD4 + T cells indicating that DUSP6 /MKP-3 is required for suppressive Treg function. Finally, to assess DUSP6 /MKP-3 function in vivo DUSP6 −/− mice were crossed with IL-10 −/− mice and assessed for the development of intestinal colitis. 
The double knockout (DKO) mice consistently developed severe inflammation with epithelial crypt hyperplasia, loss of goblet cells, and immune cell infiltration into colonic connective tissue while DKO colonic explants produced increased levels of IFN-γ and TNFα, but lower levels of IL-17A when compared to IL-10 −/− tissues. Satisfyingly, administration of PD0325901, a specific MEK inhibitor, both ameliorated and reversed the inflammatory phenotype seen in the DKO mice demonstrating that this is a direct result of increased levels of ERK1/2 signalling in the absence of DUSP6 /MKP-3 . In a recent study of the role of DUSP6 in endothelial cell inflammation Hsu et al. found that tail vein injection of TNFα caused elevated expression of Intercellular Adhesion Molecule 1 (ICAM1) in wild type but not DUSP6 −/− animals, suggesting that DUSP6 would modulate neutrophil recruitment and transendothelial migration at sites of acute inflammation. In agreement with this hypothesis, DUSP6 −/− animals injected intraperitoneally with either TNFα or LPS showed lower levels of pulmonary neutrophil infiltration and lung injury and adoptive transfer of wild type neutrophils into DUSP6 −/− mice revealed that the defect was intrinsic to endothelial cells . Surprisingly, in vitro experiments in primary human umbilical vein endothelial cells (HUVECs) revealed that the role of DUSP6 /MKP-3 in regulating TNFα-induced expression of ICAM1 involved activation of canonical nuclear factor (NF)-κB but was not dependent on its ability to dephosphorylate ERK MAPK . Taken together, these studies indicate that DUSP6 /MKP-3 may play complex and tissue specific roles in immune cell function and inflammatory processes and further work will be required to delineate the precise nature of the signalling events involved and the tissue specificity of DUSP6 /MKP-3 functions. 3.1.2 DUSP6 /MKP-3 in metabolic homeostasis The first indication that DUSP6 /MKP-3 might be involved in metabolic control was the finding that its expression was able to prevent the suppression of phosphoenolpyruvate carboxylase (PEPCK) gene expression by insulin. DUSP6 /MKP-3 was also expressed in insulin-responsive tissues and expression levels were markedly elevated in the livers of insulin-resistant genetically obese (db/db) mice . Subsequent work showed that expression of DUSP6 /MKP-3 is also increased in the livers of HFD-induced obese mice and that adenovirus-mediated DUSP6 /MKP-3 expression in lean mice promoted gluconeogenesis and increased levels of fasting blood glucose. In contrast, shRNA knockdown of DUSP6 /MKP-3 in both lean and obese mice resulted in decreased fasting blood glucose levels . Mechanistically the transcriptional upregulation of gluconeogenic genes such as PEPCK that underpinned these effects was mediated by the dephosphorylation and nuclear translocation of Forkhead box protein O1 (FOXO1). Surprisingly, the effects of DUSP6 /MKP-3 on FOXO1 were postulated to be direct, via protein-protein interaction and dephosphorylation of this transcription factor . However, this finding is extremely difficult to reconcile with the known biochemical properties of DUSP6 /MKP-3 and in particular the requirement for ERK2 binding to achieve catalytic activation of this phosphatase . Experiments utilising DUSP6 −/− mice to study metabolism have now been performed with the finding that mice lacking DUSP6 /MKP-3 are somewhat protected from diet-induced obesity . However, quite different conclusions were reached regarding the underlying mechanism. 
Feng et al. reported that DUSP6−/− mice are protected against both HFD-induced weight gain and hepatosteatosis and that these effects were accompanied by reduced liver triglyceride (TG) levels and adiposity. DUSP6−/− mice also exhibited increased energy expenditure, enhanced peripheral glucose disposal and improved systemic insulin sensitivity. Phosphoproteomic analyses of cultured Hepa1–6 cells with or without siRNAs targeting DUSP6/MKP-3, together with comparison of liver lysates from DUSP6−/− and DUSP6+/+ mice, revealed significant increases in the phosphorylation of HDAC1 and 2. Pharmacological inhibition or combined knockdown of these enzymes in primary hepatocytes from DUSP6−/− mice was able to reverse the protective effects of DUSP6/MKP-3 deletion by raising the expression levels of several lipogenic genes, indicating that these enzymes may be the relevant in vivo targets. Ruan et al. also reported protection against HFD-induced weight gain in DUSP6−/− mice, together with improved glucose tolerance, increased insulin sensitivity and protection against hepatosteatosis. However, faecal transplantation from HFD-fed DUSP6−/− mice into germ-free animals phenocopied this resistance and, following studies of DUSP6/MKP-3-dependent changes in gut microbiota, intestinal barrier function and the gut transcriptome, they concluded that loss of DUSP6/MKP-3 protects the intestinal epithelial barrier from HFD-induced disruption and subsequent remodelling of the gut flora, thereby maintaining a lean-associated microflora. They conclude that DUSP6/MKP-3 regulates homeostasis between the gut epithelium, mucosal immunity and the microbiota. In contrast, a recent study observed comparable body weight, fat and lean mass in DUSP6−/− and DUSP6+/+ mice after 26 weeks on a HFD. However, glucose tolerance was somewhat abnormal in both lean and obese DUSP6−/− mice when compared to controls. At present it is unclear why there is a discrepancy between the latter study and the previous two, but the finding that variations in the gut microbiota can have profound consequences for sensitivity to HFD, coupled with differences in the genetic backgrounds used in these studies (pure C57Bl/6J vs. mixed 129 × C57Bl/6J), may account for this. As was the case for studies of DUSP1/MKP-1, the use of an unconditional (whole-body) knockout of DUSP6/MKP-3 also makes the interpretation of these studies more difficult, and future experiments using conditional ablation of DUSP6/MKP-3 will be required to address the complexity and tissue-specific interplay in the regulation of metabolic homeostasis by this phosphatase.

3.1.3 DUSP6/MKP-3 and cancer

Given the direct involvement of Ras/ERK signalling in human cancer, a possible role for DUSP6/MKP-3 has been explored in some depth. As was the case for the inducible nuclear ERK-specific phosphatase DUSP5, increased DUSP6/MKP-3 expression has been observed in primary tumours and cancer cell lines that harbour mutations in either Ras or Braf, where its role as a negative feedback regulator of ERK activity has led to the suggestion that it may act as a tumour suppressor. Thus, DUSP6 expression is initially elevated and then epigenetically silenced during the progression of mutant Kras-driven pancreatic ductal adenocarcinoma, with the lowest levels detected in the most invasive and poorly differentiated tumours.
In a similar vein, loss of DUSP6/MKP-3 expression in mutant Kras-driven lung tumours is associated with increased disease severity and histological grade, and loss of heterozygosity at the DUSP6 locus occurs in almost 20% of patients. However, as is the case for many MKPs, much of the mechanistic data obtained so far involves the reversal of cancer-associated phenotypes by ectopic expression of DUSP6/MKP-3 in cancer cell lines and must therefore be treated with a degree of caution. No direct evidence of a tumour suppressor function for this MKP has yet been obtained by crossing DUSP6−/− mice into established murine cancer models. In contrast, two recent reports contain evidence that DUSP6/MKP-3 may actually be an oncogenic driver in certain human cancers. Firstly, Shojaee et al. found that the acute oncogenic activation of BCR-Abl and NrasG12D in human pre-B cells was invariably lethal. However, any surviving cells were transformed and displayed increased expression of negative regulators of ERK signalling, including DUSP6/MKP-3. Furthermore, this upregulation of DUSP6/MKP-3 was also seen in pre-B ALL cells, where it was driven by both Abl and ERK activity, and high DUSP6/MKP-3 mRNA levels in patients with Philadelphia chromosome-positive (Ph+) (BCR-Abl-driven) ALL were associated with shorter survival. To gain mechanistic insight, Shojaee et al. transformed bone marrow-derived B cell lineage progenitor cells from DUSP6−/− mice and wild-type controls with BCR-Abl1. Interestingly, the survival of the DUSP6−/− cells was significantly reduced, and conditional expression of NrasG12D in pre-B cells was able to transform wild type but not DUSP6−/− cells. siRNA-mediated knockdown of DUSP6/MKP-3 in human pre-B ALL cells also reduced survival. All of these observations strongly indicate that pre-B ALL cells are dependent on DUSP6/MKP-3-mediated negative feedback control of ERK signalling for continued survival and growth. In support of this conclusion, BCI, a compound identified as an allosteric inhibitor of DUSP6/MKP-3, caused a rapid increase in ERK activity in patient-derived Ph+ ALL cells. Mouse xenograft experiments using Ph+ ALL cells derived from patients after relapse during ongoing therapy with the Abl inhibitor imatinib (Gleevec) showed these to be resistant to tyrosine kinase inhibition, but sensitive to treatment with BCI, indicating that this drug might be used to treat TKI-resistant Ph+ ALL. While these results are very provocative, they must be treated with a degree of caution in that BCI, as mentioned previously in the context of its use as an inhibitor of DUSP1/MKP-1, is known to be relatively non-specific and displays considerable off-target toxicity. In a recent genetic screen, Wittig-Blaich et al. identified DUSP6/MKP-3 amongst a set of genes with growth-suppressive properties consistent with tumour suppressor function. However, specifically in the context of mutant Braf-driven melanoma, they found that siRNA-mediated knockdown of DUSP6/MKP-3 caused apoptosis. They speculate that DUSP6/MKP-3 might be required to prevent BrafV600E hyperactivation from triggering cell death via ERK1/2 downstream substrates and suggest that it might represent a synthetic lethal drug target in this subset of melanoma patients.
The idea that MKPs can intervene to prevent the engagement of tumour-suppressive pathways has been suggested previously, mainly in the context of ERK-dependent oncogene-induced senescence (OIS), and, as discussed previously, there is some support for differential outcomes in terms of cell proliferation and senescence after deletion of the inducible nuclear ERK-specific MKP DUSP5 in cells expressing either activated Ras or Braf. In support of the idea that DUSP6/MKP-3 may be pro-oncogenic in certain cancer types, DUSP6/MKP-3 is upregulated in human glioblastoma cell lines, and mouse xenograft experiments showed that tumours arising from glioblastoma cells expressing DUSP6/MKP-3 grew significantly faster than non-expressing controls. The overexpression of DUSP6/MKP-3 in papillary thyroid carcinoma (PTC) cell lines is also associated with increased cell migration and invasion. Finally, although DUSP1/MKP-1 has been widely studied as a modulator of drug responses in cancer chemotherapy, less attention has been paid to the ERK-specific MKPs. However, Phuchareon et al. recently identified the down-regulation of DUSP6/MKP-3 as a contributing factor to the reactivation of Ras-ERK signalling and drug resistance in epidermal growth factor receptor (EGFR)-mutant lung cancer cell lines exposed to the TKIs gefitinib (Iressa) and erlotinib (Tarceva). Resistance was mediated by increased ERK-dependent phosphorylation and degradation of the extra-long isoform of the pro-apoptotic B-cell lymphoma-2 (Bcl-2) family protein Bim (BimEL), thus promoting cancer cell survival. Interestingly, DUSP6/MKP-3 down-regulation has also been implicated in mediating reactivation of Ras-ERK signalling and drug resistance in lung cancer cells harbouring the echinoderm microtubule-associated protein-like 4-anaplastic lymphoma kinase (EML4-ALK) fusion protein exposed to the TKI crizotinib (Xalkori). As one or other of these two driver mutations is present in almost 20% of non-small cell lung cancers (NSCLC), targeting ERK signalling in combination with the use of TKIs may be a viable strategy to forestall or prevent TKI resistance, and loss of DUSP6/MKP-3 is worthy of wider study as a possible ERK-mediated drug-resistance mechanism in other tumour types driven by abnormal tyrosine kinase activity. In conclusion, it is perhaps no surprise that DUSP6/MKP-3 is implicated in the regulation of oncogenic signalling through the Ras/ERK pathway, and recent work indicates that the effects of altered expression levels may depend on both the oncogenic background and the tissue(s) involved. Further studies will be required, preferably using conditional deletion of this MKP in murine models, to dissect out its precise role in carcinogenesis, particularly given its emerging role in immune regulation and inflammatory processes, both of which may have a bearing on tumour initiation and development.
DUSP7/MKP-X

DUSP7/MKP-X is most closely related to DUSP6/MKP-3 and shares many of its properties, including cytoplasmic localisation, a high degree of substrate specificity for the ERK1/2 MAPKs and substrate-induced catalytic activation, but, in contrast to DUSP6/MKP-3, almost nothing is known about its physiological function(s) or association with human disease. However, DUSP7/MKP-X was identified in an siRNA-based phenotypic screen for genes involved in meiotic progression in mouse oocytes, and more recent work has shown that DUSP7-depleted oocytes either fail or are significantly delayed in resuming meiosis and that cyclin-dependent kinase-1/cyclin B (Cdk1/CycB) activity drops below the critical level required to reinitiate meiosis. Once in meiosis, DUSP7/MKP-X-depleted oocytes also had severe chromosome alignment defects and progressed into anaphase prematurely. These effects were judged to be secondary to a failure to dephosphorylate and inhibit protein kinase C isoforms, a prerequisite for the timely activation of Cdk1/CycB, and are likely to be significant as both male and female mice lacking DUSP7/MKP-X have been reported as viable but infertile (http://www.mousephenotype.org/data/genes/MGI:2387100). As failure to resume oocyte meiosis is a contributing factor to female infertility in humans, it will be important to establish whether this signalling pathway is conserved.
DUSP9/MKP-4

MKP-4, the third member of this group of cytoplasmic phosphatases, is encoded by the X-linked DUSP9 gene and has properties in common with DUSP6/MKP-3 and DUSP7/MKP-X, although its substrate specificity is somewhat more relaxed with respect to the binding and inactivation of JNK and p38 MAPKs. DUSP9/MKP-4 is also somewhat unusual in that it is not transcriptionally regulated in response to either growth factor stimulation or stress, but instead seems to be constitutively expressed and regulated via phosphorylation of a conserved serine residue adjacent to the KIM, which abrogates substrate binding. Unconditional deletion of DUSP9 results in embryonic lethality due to placental insufficiency, but tetraploid rescue experiments demonstrated that it is otherwise dispensable for normal embryonic development. Information about possible physiological roles for DUSP9 and links to human disease is relatively scant. Selective expression of DUSP9 in plasmacytoid dendritic cells (pDCs), but not conventional dendritic cells (cDCs), suggested a possible role in determining this phenotype. However, conditional deletion of DUSP9 in pDCs did not increase ERK activation after TLR9 stimulation and only weakly affected IFN-β and IL-12β (IL-12p40) production by these cells, indicating that this MKP is not essential for the high-level production of IFN-β that is characteristic of pDCs. However, recent work now indicates a link between DUSP9/MKP-4 and metabolic homeostasis. In obese and insulin-resistant mouse models, DUSP9/MKP-4 protein levels are reported to be elevated in insulin-responsive tissues, and expression of DUSP9/MKP-4 caused increased expression of PEPCK. Overexpression of DUSP9/MKP-4 in adipocytes also blocked insulin-stimulated glucose uptake, again suggesting that this enzyme antagonises the effects of insulin in responsive cells and tissues. However, in a stress-induced in vitro model of insulin resistance and following adenoviral-mediated overexpression of DUSP9/MKP-4 in the livers of genetically obese (ob/ob) mice, Emanuelli et al. reported that DUSP9/MKP-4 expression ameliorated glucose intolerance, decreased the expression of gluconeogenic genes and reduced hepatic steatosis. Despite these apparently contradictory results, a recent study in which conditional liver-specific deletion of DUSP9/MKP-4 was used to study the response to a high fat diet has demonstrated that loss of DUSP9/MKP-4 in the liver sensitises animals to hepatic steatosis and inflammatory responses and that DUSP9 deficiency aggravated high fat, high cholesterol (HFHC)-induced liver fibrosis. In this context it will be interesting to compare the effects of whole-body DUSP9 knockout, and also loss of this MKP in other insulin-responsive tissues, as studies of DUSP1/MKP-1 function in metabolic homeostasis have revealed that the liver phenotype may not be dominant over the effects of loss of function in other tissues. Finally, a genetic variant mapping near the DUSP9 gene locus has repeatedly been detected in genome-wide association studies (GWAS) as a risk factor for the development of type 2 diabetes across different ethnicities in human populations, further reinforcing a link between DUSP9/MKP-4 and metabolic control. The availability of a conditionally targeted allele for DUSP9 should greatly accelerate future work on the possible role of this MKP, both in metabolic disease and in other pathologies.
The JNK and p38-specific MKPs

4.1 DUSP8

Along with DUSP7/MKP-X, DUSP8 is probably the least studied of the 10 dual-specificity MKPs, and virtually nothing is known about its physiological function. Since its identification as a gene encoding an MKP with a translated complex trinucleotide repeat within its coding region, and the characterisation of its specificity for the inactivation of JNK and p38 MAPKs, fewer than 20 papers have been published on DUSP8, and although targeted mouse ES cells have been generated by the International Mouse Phenotyping Consortium (IMPC), these have not yet been exploited to produce a mouse model.

4.2 DUSP10/MKP-5

DUSP10/MKP-5 was first characterised as a widely expressed JNK- and p38-specific MKP which, when expressed in mammalian cells, is found in both the cytoplasm and nucleus. One unique feature of DUSP10/MKP-5 is that it carries an amino-terminal extension of unknown function, which may carry signals that specify its subcellular localisation.

4.2.1 DUSP10/MKP-5 in immunity and inflammation

Along with DUSP1/MKP-1, DUSP10/MKP-5 was one of the first MKPs found to regulate both innate and adaptive immune function. While development of the myeloid and lymphoid lineages was normal in mice lacking DUSP10/MKP-5, the DUSP10/MKP-5 gene is inducible at the transcriptional level in response to TLR signalling, and peritoneal macrophages lacking DUSP10/MKP-5 showed increased production of the pro-inflammatory cytokines IL-6 and TNFα. Consistent with this, LPS-injected DUSP10−/− mice had higher serum levels of TNFα when compared to wild type controls. Mechanistically, JNK appeared to be the relevant DUSP10/MKP-5 substrate, as elevated levels of phospho-JNK were observed in both macrophages and T cells derived from DUSP10−/− mice. However, subsequent experiments have clearly shown that DUSP10/MKP-5 does regulate p38 MAPK activity in both monocytes/macrophages and neutrophils, indicating that both stress-activated MAPK pathways are subject to negative regulation by DUSP10/MKP-5. As discussed previously in relation to the function of DUSP1/MKP-1, an important function of the innate immune system is mediated via professional antigen-presenting cells (APC), which activate antigen-specific T cells, thus bridging the innate and adaptive immune systems. LPS-treated APCs isolated from DUSP10−/− mice exhibited enhanced priming of ovalbumin-transgenic OT-I (CD8+) and OT-II (CD4+) T cells, as assessed by increased levels of IL-2 production and T cell proliferation when compared to wild type APCs. Taken together, these results indicate that DUSP10/MKP-5 plays a non-redundant role as a negative regulator of the innate immune response. In terms of T cell differentiation and function, DUSP10−/− Th1 and Th2 cells showed increased JNK activation, but this was lost rapidly on anti-CD3 re-stimulation, indicating that DUSP10/MKP-5 is not the sole arbiter of JNK activity in these cells. Activation of CD4+ T cells from DUSP10−/− mice with anti-CD3 (with or without anti-CD28 co-stimulation) resulted in reduced proliferation when compared to wild type, indicating that DUSP10/MKP-5 is required for T cell expansion. However, IFN-γ production by activated Th1 cells and IL-4 production by activated Th2 cells lacking DUSP10/MKP-5 were increased, while activated effector CD8+ T cells from DUSP10−/− mice produced more IFN-γ and TNFα than wild type cells.
In concordance with these results, immunisation of DUSP10−/− mice with keyhole limpet haemocyanin (KLH) and re-stimulation ex vivo resulted in reduced antigen-driven proliferation of splenic T cells but increased levels of IFN-γ and IL-4 production, confirming the reciprocal role of DUSP10/MKP-5 in regulating T-cell clonal expansion and effector T-cell cytokine expression. Finally, the role of DUSP10/MKP-5 in susceptibility to infection with LCMV and to MOG-induced EAE was tested. While DUSP10−/− mice showed little difference in their initial primary T cell response to infection or in viral clearance, re-challenge caused immune-mediated death, probably as a result of the markedly elevated levels of serum TNFα produced by CD4+ and CD8+ T cells. In the EAE model, DUSP10−/− mice exhibited a reduced number of CD4+ T cells in the brain, which correlated with reduced incidence and severity of disease, indicating that this MKP plays a positive role in the generation and/or expansion of autoreactive T cells in this autoimmune disease model. More recent work has explored the role of this MKP in regulating host responses to inflammatory stimuli using a mouse model of the local Shwartzman reaction (LSR). LSR is a delayed vascular injury produced by sequential subcutaneous injection of LPS and TNFα, and DUSP10−/− mice were found to be much more susceptible to this form of injury. Mechanistically, this was the result of increased p38 MAPK activation in neutrophils and the production of greatly increased levels of superoxide anion by the NADPH oxidase complex, thus revealing an essential role for DUSP10/MKP-5 as a negative regulator of p38-mediated neutrophil reactive oxygen intermediate (ROI) production. Interestingly, p38-mediated cytokine and ROS production by macrophages also accounts for the increased susceptibility of DUSP10−/− mice to endotoxin-induced acute lung injury following intratracheal injection of LPS, again reinforcing the role of this MKP in protection against inflammatory tissue injury. Collectively, these studies reveal that DUSP10/MKP-5, like DUSP1/MKP-1, plays an important role in regulating both innate and adaptive immune responses, and they reveal significant complexity and tissue specificity in its interactions with MAP kinase signalling. This is illustrated by the observation that while the defect in T cell expansion seen on loss of DUSP10/MKP-5 is responsible for protection against EAE, this deficit does not result in a reduction in the numbers of LCMV-reactive T cells following viral infection, probably because of the compensatory effects of DUSP10/MKP-5 loss in stimulating APC function.

4.2.2 DUSP10/MKP-5 function in other tissues

Shi et al. have reported a function for DUSP10/MKP-5 in regulating muscle stem cell function and muscle regeneration. DUSP10−/− mice had increased levels of p38 and JNK activation, muscle mass and muscle fibre size when compared to wild type animals. Furthermore, in response to muscle injury following injection of cardiotoxin, they display an enhanced regenerative response associated with early upregulation of JNK and later upregulation of p38 activity. Interestingly, when crossed into the mdx (dystrophin null) mouse model of Duchenne muscular dystrophy, the double knockout animals showed an amelioration of disease, manifested by improved skeletal muscle morphology, a reduced number of degenerating muscle fibres and improved contractile function.
Mechanistically, this was underpinned by increased muscle stem cell proliferation and differentiation, which were regulated by increased JNK-mediated expression of cyclin D and by p38-mediated myogenesis, respectively. Finally, despite its function as an immune regulator, these effects of DUSP10/MKP-5 loss appeared to be completely independent of alterations in immune cell infiltration into damaged muscle. Interestingly, the effects of DUSP10/MKP-5 deletion on myogenesis appear to be due to two main effects. Firstly, deletion of DUSP10/MKP-5 increases MAPK-dependent phosphorylation of the guanine nucleotide exchange factor for the Ras-related protein Rab3A (GRAB) at serine 169, a site required for secretion of the promyogenic cytokine IL-6. Secondly, in the absence of DUSP10/MKP-5, increased JNK- and p38-mediated phosphorylation and activation of STAT3 increases the expression of the anti-apoptotic protein B-cell lymphoma 2 (Bcl2), thus preventing apoptosis during regenerative myogenesis, and also leads to improved antioxidant defence capacity due to a sustained increase in catalase expression that protects mitochondrial function. Finally, a recent study has used DUSP10−/− mice to address the possible function of this MKP in modulating the development of DSS-induced intestinal inflammation and colitis-associated cancer (CAC). Surprisingly, given the previous findings that mice lacking DUSP10/MKP-5 were more sensitive to inflammatory tissue damage in skin and lung, DSS-treated DUSP10−/− mice exhibited lower levels of intestinal inflammation, better intestinal crypt architecture and lower levels of pro-inflammatory cytokine/chemokine expression than wild type animals. This protection was secondary to improved intestinal epithelial cell (IEC) barrier function, which serves to separate luminal contents from the mucosal immune system, as evidenced by reduced IEC leakage of fluorescein isothiocyanate (FITC)-dextran. Mechanistically, this was due to increased ERK-mediated expression of Krüppel-like factor 5 (KLF5), a transcription factor which up-regulates cyclin B expression and promotes IEC proliferation during intestinal regeneration and wound healing. In accordance with this, increased numbers of proliferating Ki67-positive cells were observed in the intestinal crypts of DSS-treated DUSP10−/− colon compared with wild type tissue. Again, these results suggest possible tissue-specific variation in DUSP10/MKP-5 activity towards MAPKs, as ERK but not JNK or p38 activity was affected by DUSP10/MKP-5 loss. Finally, although protective against DSS-induced inflammation, the combined treatment of DUSP10−/− mice with the mutagen azoxymethane (AOM) and DSS resulted in an increased incidence of adenomatous polyps of larger size, which stained positive for Ki67 and β-catenin. Overall, this strongly suggests that the IEC and the subsequent tumours were more proliferative in the absence of DUSP10/MKP-5, indicative of a tumour suppressor function for this MKP, an idea supported by the observation that higher levels of DUSP10/MKP-5 expression correlated with better survival amongst patients with colorectal cancer.

4.3 DUSP16/MKP-7

DUSP16/MKP-7 was the last of the 10 dual-specificity MKPs to be identified and was characterised as a JNK- and p38-specific MKP with a possible function as a regulator of JNK activity in macrophages. Although relatively little is known about this phosphatase, three recent studies using knockout mice have begun to shed some light on its possible physiological role(s).
Firstly, using a gene trap null mutation, Niedzielska et al. reported that loss of DUSP16/MKP-7 caused perinatal lethality. Observing that DUSP16/MKP-7 was inducible in macrophages in response to TLR agonists, they used fetal liver cells from the null mice to reconstitute the lymphoid and myeloid lineages in lethally irradiated syngeneic CD45.1+ animals. They found that T and B cell populations were present in normal numbers, that >95% of resident macrophages were derived from the DUSP16/MKP-7 null donor cells, and that the mice had normal numbers of granulocytes and plasmacytoid dendritic cells, indicating that DUSP16/MKP-7 is not essential for homeostasis of the immune system under steady state conditions. However, there was a deficit in the numbers of splenic CD11c+/CD11b+ myeloid dendritic cells secondary to impaired granulocyte-macrophage colony-stimulating factor (GM-CSF)-driven proliferation of bone marrow progenitors. Subjecting reconstituted mice to LPS challenge revealed no undue sensitivity to sepsis, but did reveal JNK-dependent overproduction of IL-12β (IL-12p40) by macrophages in response to LPS. Overall, these results reveal a dual function for DUSP16/MKP-7 in the innate immune system involving selective control of differentiation and cytokine production, but further work is required to map out the physiological consequences of this regulation. Zhang et al. also reported that loss of DUSP16/MKP-7 was lethal and used reconstitution experiments to study the role of this MKP in adaptive immunity. In agreement with Niedzielska et al., they found that T cell development and numbers were normal, but that CD4+ T cells lacking DUSP16/MKP-7 were hyper-responsive to activation, produced much more IL-2 and had higher rates of proliferation when compared to wild type cells. To study T cell differentiation and function, they cultured naive DUSP16−/− CD4+ T cells under Th1, Th2 or Th17 conditions in vitro and found that while functional Th1 and Th2 cells were produced normally, DUSP16−/− Th17 cell populations produced less IL-17A and IL-17F and contained nearly 50% fewer IL-17A-producing cells compared with WT cells. Interestingly, U0126, a specific MEK inhibitor, efficiently reversed the deficit in IL-17A-producing Th17 cells in vitro, indicating that regulation of ERK, and not JNK or p38, was responsible. Given the role that Th17 cells play in autoimmunity, the susceptibility of the reconstituted animals was also assessed using the MOG-induced model of EAE and, consistent with the functional Th17 deficit, these animals were less susceptible to disease, indicating an essential role for this MKP in autoimmune responses. Finally, a recent study has explored the reason for the embryonic/prenatal lethality in mice lacking DUSP16/MKP-7 and revealed an essential role for this MKP in brain development. Embryos lacking DUSP16/MKP-7 exhibited congenital obstructive hydrocephalus together with brain overgrowth. This was secondary to blockage of the midbrain aqueduct by the expansion of neural progenitor cells, eventually preventing the outflow of cerebrospinal fluid. Interestingly, only an increase in cells staining positively for phospho-p38 was observed in the affected regions of the CNS, indicating that this MAPK, rather than ERK or JNK, could be the relevant target.
Taken together, these studies reveal the first essential role for a member of the MKP family of enzymes in brain development and, as the phenotype of DUSP16−/− mice recapitulates aspects of different human neurodevelopmental disorders, suggest that either DUSP16/MKP-7 or the pathways it regulates may be implicated in these conditions. They also reveal specific defects in innate and adaptive immunity in mice lacking DUSP16/MKP-7. In particular, the specific role of DUSP16/MKP-7 in promoting autoimmunity makes it a potential therapeutic target in a range of human disorders such as inflammatory bowel disease, rheumatoid arthritis and lupus.
DUSP8 Along with DUSP 7/MKP-X, DUSP8 is probably the least studied of the 10 dual-specificity MKPs and there is virtually nothing known about its physiological function. Since its identification as a gene encoding an MKP with a translated complex trinucleotide repeat within its coding region and characterisation of its specificity for the inactivation of JNK and p38 MAPKs fewer than 20 papers have been published on DUSP8 and although targeted mouse ES cells have been generated by the international mouse phenotyping consortium (IMPC), these have not yet been exploited to produce a mouse model.
DUSP10 /MKP-5 DUSP10 /MKP-5 was first characterised as a widely expressed JNK and p38 specific MKP, which when expressed in mammalian cells is found in both the cytoplasm and nucleus . One unique feature of DUSP10 /MKP-5 is that it carries an amino-terminal extension of unknown function, but which may carry signals that specify its subcellular localisation . 4.2.1 DUSP10 /MKP-5 in immunity and inflammation Along with DUSP1 /MKP-1, DUSP10 /MKP-5 was one of the first MKPs found to regulate both innate and adaptive immune function. While development of the myeloid and lymphoid lineages was normal in mice lacking DUSP10 /MKP-5, the DUSP10 /MKP-5 gene is inducible at the transcriptional level in response to TLR signalling and peritoneal macrophages lacking DUSP10 /MKP-5 showed increased production of the pro-inflammatory cytokines IL-6 and TNFα. Consistent with this, LPS injected DUSP10 −/− mice had higher serum levels of TNFα when compared to wild type controls. Mechanistically, JNK appeared to be the relevant DUSP10 /MKP-5 substrate as elevated levels of phospho-JNK were observed in both macrophages and T cells derived from DUSP10 −/− mice . However, subsequent experiments have clearly shown that DUSP10 /MKP5 does regulate p38 MAPK activity in both monocyte/macrophages and neutrophils indicating that both stress activated MAPK pathways are subject to negative regulation by DUSP10 /MKP-5. As discussed previously in relation to the function of DUSP1 /MKP-1 , an important function of the innate immune system is mediated via professional antigen presenting cells (APC), which activate antigen specific T cells, thus bridging the innate and adaptive immune systems . LPS treated APCs isolated from DUSP10 −/− mice exhibited enhanced priming of ovalbumin-transgenic OT-I (CD4+) and OT-II (CD8+) T cells as assessed by increased levels of IL-2 production and T cell proliferation when compared to wild type APCs. Taken together these results indicate that DUSP10 /MKP-5 plays a non-redundant role as a negative regulator of the innate immune response . In terms of T cell differentiation and function DUSP10 −/− Th1 and Th2 cells showed increased JNK activation, but this was lost rapidly on anti-CD3 re-stimulation, indicating that DUSP10 /MKP-5 is not the sole arbiter of JNK activity in these cells. Activation of CD4+ T cells from DUSP10 −/− mice with anti-CD3 (with or without anti-CD28 co-stimulation) resulted in reduced proliferation when compared to wild type, indicating that DUSP10 /MKP-5 is required for T cell expansion. However, IFN-γ production by activated Th1 and IL-4 production by activated Th2 cells lacking DUSP10 /MKP-5 were increased, while activated effector CD8+ T cells from DUSP10 −/− mice produced more IFN-γ and TNFα than wild type cells . In concordance with these results, immunisation of DUSP10 −/− mice with keyhole limpet haemocyanin (KLH) and re-stimulation ex vivo , resulted in reduced antigen-driven proliferation of splenic T cells but increased levels of IFN-γ and IL-4 production confirming the reciprocal role of DUSP10 /MKP-5 in regulating of T-cell clonal expansion and effector T-cell cytokine expression . Finally, the role of DUSP10 /MKP-5 in susceptibility to infection with LCMV and to MOG-induced EAE was tested. While DUSP10 −/− mice showed little difference in their initial primary T cell response to infection or in viral clearance, re-challenge caused immune–mediated death, probably as a result of the markedly elevated levels of serum TNFα produced by CD4+ and CD8+ T cells. 
In the EAE model DUSP10 −/− mice exhibited a reduced number of CD4 T cells in the brain, which correlated with reduced incidence and severity of disease, indicating that this MKP plays a positive role in the generation and/or expansion of autoreactive T cells in this autoimmune disease model . More recent work has explored the role of this MKP in regulating host responses to inflammatory stimuli using a mouse model of local Shwartzman reaction (LSR). LSR is a delayed vascular injury produced by sequential subcutaneous injection of LPS and TNFα and DUSP10 −/− mice were found to be much more susceptible to this form of injury. Mechanistically this was the result of increased p38 MAPK activation in neutrophils and the production of greatly increased levels of superoxide anion by the NADPH oxidase complex, thus revealing an essential role for DUSP10 /MKP-5 as a negative regulator of p38-mediated neutrophil reactive oxygen intermediate (ROI) production, . Interestingly, p38-mediated cytokine and ROS production by macrophages also accounts for the increased susceptibility of DUSP10 −/− mice to endotoxin-induced acute lung injury following intratracheal injection of LPS, again reinforcing the role of this MKP in protection against inflammatory tissue injury . Collectively these studies reveal that DUSP10 /MKP-5 like DUSP1 /MKP-1 plays an important role in regulating both innate and adaptive immune responses and reveals significant complexity and tissue specificity in its interactions with MAP kinase signalling. This is illustrated by the observation that while the defect in T cell expansion seen on loss of DUSP10 /MKP-5 is responsible for protection against EAE, this deficit does not result in a reduction in the numbers of LCMV-reactive T cells following viral infection, probably because of the compensatory effects of DUSP10 /MKP-5 loss in stimulating APC function. 4.2.2 DUSP10 /MKP-5 function in other tissues Shi et al. have reported a function for DUSP10 /MKP-5 in regulating muscle stem cell function and muscle regeneration. DUSP10 −/− mice had increased levels of p38 and JNK activation, muscle mass and muscle fibre size when compared to wild type animals. Furthermore, in response to muscle injury following injection of cardiotoxin, they display an enhanced regenerative response associated with early upregulation of JNK and later of p38 activity. Interestingly, when crossed into the mdx (dystrophin null) mouse model of Duchenne's muscular dystrophy, the double knockout animals showed an amelioration of disease manifested by improved skeletal muscle morphology, a reduced number of degenerating muscle fibres and improved contractile function. Mechanistically, this was underpinned by increased muscle stem cell proliferation and differentiation, which were regulated by increased JNK-mediated expression of cyclin D and p38 mediated myogenesis respectively. Finally, despite its function as an immune regulator, these effects of DUSP10 /MKP-5 loss appeared to be completely independent of alterations in immune cell infiltration into damaged muscle . Interestingly, the effects of DUSP10 /MKP-5 deletion on myogenesis appear to be due to two main effects. Firstly, deletion of DUSP10 /MKP-5 increases MAPK-dependent phosphorylation of guanine nucleotide exchange factor for the Ras-related protein Rab-3A Rab3A (GRAB) at Serine 169, a site required for secretion of the promyogenic cytokine IL-6 . 
Secondly, in the absence of DUSP10 /MKP-5 increased JNK and p38 mediated phosphorylation and activation of STAT3, increases the expression of the anti-apoptotic protein B-cell lymphoma 2 (Bcl2) thus preventing apoptosis during regenerative myogenesis and also leads to improved antioxidant defence capacity due to a sustained increase in catalase expression that protects mitochondrial function . Finally, a recent study has used DUSP10 −/− mice to address the possible function of this MKP in modulating the development of DSS-induced intestinal inflammation and colitis associated cancer (CAC) . Surprisingly, given the previous findings that mice lacking DUSP10/MKP-5 were more sensitive to inflammatory tissue damage in skin and lung , DSS treated DUSP10 −/− mice exhibited lower levels of intestinal inflammation, better intestinal crypt architecture and lower levels of pro-inflammatory cytokine/chemokine expression than wild type animals. This protection was secondary to improved intestinal epithelial cell (IEC) barrier function, which serves to separate luminal contents from the mucosal immune system, as evidenced by reduced IEC leakage of fluorescein isothiocyanate (FITC)-dextran . Mechanistically, this was due to increased ERK–mediated expression of Kruppel like factor-5 (KLF5) a transcription factor which up-regulates cyclinB expression and promotes IEC proliferation during intestinal regeneration and wound healing. In accordance with this increased numbers of proliferating Ki67-positive cells were observed in the intestinal crypts of DSS-treated DUSP10 −/− colon compared with wild type tissue . Again these results suggest possible tissue specific variation in DUSP10 /MKP-5 activity towards MAPKs as ERK but not JNK or p38 activity was affected by DUSP10 /MKP-5 loss. Finally, although protective against DSS-induced inflammation, the combined treatment of DUSP10 −/− mice with the mutagen azoxymethane (AOM) and DSS resulted in an increased incidence of adenomatous polyps of larger size, which stained positive for Ki67 and β-catenin. Overall, this strongly suggests that the IEC and the subsequent tumours were more proliferative in the absence of DUSP10 /MKP-5 indicative of a tumour suppressor function for this MKP, an idea supported by the observation that higher levels of DUSP10 /MKP-5 expression correlated with better survival amongst patients with colorectal cancer .
DUSP10 /MKP-5 in immunity and inflammation Along with DUSP1 /MKP-1, DUSP10 /MKP-5 was one of the first MKPs found to regulate both innate and adaptive immune function. While development of the myeloid and lymphoid lineages was normal in mice lacking DUSP10 /MKP-5, the DUSP10 /MKP-5 gene is inducible at the transcriptional level in response to TLR signalling and peritoneal macrophages lacking DUSP10 /MKP-5 showed increased production of the pro-inflammatory cytokines IL-6 and TNFα. Consistent with this, LPS injected DUSP10 −/− mice had higher serum levels of TNFα when compared to wild type controls. Mechanistically, JNK appeared to be the relevant DUSP10 /MKP-5 substrate as elevated levels of phospho-JNK were observed in both macrophages and T cells derived from DUSP10 −/− mice . However, subsequent experiments have clearly shown that DUSP10 /MKP5 does regulate p38 MAPK activity in both monocyte/macrophages and neutrophils indicating that both stress activated MAPK pathways are subject to negative regulation by DUSP10 /MKP-5. As discussed previously in relation to the function of DUSP1 /MKP-1 , an important function of the innate immune system is mediated via professional antigen presenting cells (APC), which activate antigen specific T cells, thus bridging the innate and adaptive immune systems . LPS treated APCs isolated from DUSP10 −/− mice exhibited enhanced priming of ovalbumin-transgenic OT-I (CD4+) and OT-II (CD8+) T cells as assessed by increased levels of IL-2 production and T cell proliferation when compared to wild type APCs. Taken together these results indicate that DUSP10 /MKP-5 plays a non-redundant role as a negative regulator of the innate immune response . In terms of T cell differentiation and function DUSP10 −/− Th1 and Th2 cells showed increased JNK activation, but this was lost rapidly on anti-CD3 re-stimulation, indicating that DUSP10 /MKP-5 is not the sole arbiter of JNK activity in these cells. Activation of CD4+ T cells from DUSP10 −/− mice with anti-CD3 (with or without anti-CD28 co-stimulation) resulted in reduced proliferation when compared to wild type, indicating that DUSP10 /MKP-5 is required for T cell expansion. However, IFN-γ production by activated Th1 and IL-4 production by activated Th2 cells lacking DUSP10 /MKP-5 were increased, while activated effector CD8+ T cells from DUSP10 −/− mice produced more IFN-γ and TNFα than wild type cells . In concordance with these results, immunisation of DUSP10 −/− mice with keyhole limpet haemocyanin (KLH) and re-stimulation ex vivo , resulted in reduced antigen-driven proliferation of splenic T cells but increased levels of IFN-γ and IL-4 production confirming the reciprocal role of DUSP10 /MKP-5 in regulating of T-cell clonal expansion and effector T-cell cytokine expression . Finally, the role of DUSP10 /MKP-5 in susceptibility to infection with LCMV and to MOG-induced EAE was tested. While DUSP10 −/− mice showed little difference in their initial primary T cell response to infection or in viral clearance, re-challenge caused immune–mediated death, probably as a result of the markedly elevated levels of serum TNFα produced by CD4+ and CD8+ T cells. In the EAE model DUSP10 −/− mice exhibited a reduced number of CD4 T cells in the brain, which correlated with reduced incidence and severity of disease, indicating that this MKP plays a positive role in the generation and/or expansion of autoreactive T cells in this autoimmune disease model . 
More recent work has explored the role of this MKP in regulating host responses to inflammatory stimuli using a mouse model of the local Shwartzman reaction (LSR). The LSR is a delayed vascular injury produced by sequential subcutaneous injection of LPS and TNFα, and DUSP10 −/− mice were found to be much more susceptible to this form of injury. Mechanistically, this was the result of increased p38 MAPK activation in neutrophils and the production of greatly increased levels of superoxide anion by the NADPH oxidase complex, thus revealing an essential role for DUSP10 /MKP-5 as a negative regulator of p38-mediated neutrophil reactive oxygen intermediate (ROI) production . Interestingly, p38-mediated cytokine and ROI production by macrophages also accounts for the increased susceptibility of DUSP10 −/− mice to endotoxin-induced acute lung injury following intratracheal injection of LPS, again reinforcing the role of this MKP in protection against inflammatory tissue injury . Collectively, these studies reveal that DUSP10 /MKP-5, like DUSP1 /MKP-1, plays an important role in regulating both innate and adaptive immune responses and reveal significant complexity and tissue specificity in its interactions with MAP kinase signalling. This is illustrated by the observation that while the defect in T cell expansion seen on loss of DUSP10 /MKP-5 is responsible for protection against EAE, this deficit does not result in a reduction in the numbers of LCMV-reactive T cells following viral infection, probably because of the compensatory effects of DUSP10 /MKP-5 loss in stimulating APC function.
DUSP10 /MKP-5 function in other tissues Shi et al. have reported a function for DUSP10 /MKP-5 in regulating muscle stem cell function and muscle regeneration. DUSP10 −/− mice had increased levels of p38 and JNK activation, together with increased muscle mass and muscle fibre size, when compared to wild type animals. Furthermore, in response to muscle injury following injection of cardiotoxin, they displayed an enhanced regenerative response associated with early upregulation of JNK and later of p38 activity. Interestingly, when crossed into the mdx (dystrophin null) mouse model of Duchenne muscular dystrophy, the double knockout animals showed an amelioration of disease manifested by improved skeletal muscle morphology, a reduced number of degenerating muscle fibres and improved contractile function. Mechanistically, this was underpinned by increased muscle stem cell proliferation and differentiation, which were regulated by increased JNK-mediated expression of cyclin D and p38-mediated myogenesis, respectively. Finally, despite its function as an immune regulator, these effects of DUSP10 /MKP-5 loss appeared to be completely independent of alterations in immune cell infiltration into damaged muscle . Interestingly, the effects of DUSP10 /MKP-5 deletion on myogenesis appear to be due to two main effects. Firstly, deletion of DUSP10 /MKP-5 increases MAPK-dependent phosphorylation of the guanine nucleotide exchange factor for the Ras-related protein Rab3A (GRAB) at Serine 169, a site required for secretion of the promyogenic cytokine IL-6 . Secondly, in the absence of DUSP10 /MKP-5, increased JNK- and p38-mediated phosphorylation and activation of STAT3 increases the expression of the anti-apoptotic protein B-cell lymphoma 2 (Bcl2), thus preventing apoptosis during regenerative myogenesis, and also leads to improved antioxidant defence capacity due to a sustained increase in catalase expression that protects mitochondrial function . Finally, a recent study has used DUSP10 −/− mice to address the possible function of this MKP in modulating the development of DSS-induced intestinal inflammation and colitis-associated cancer (CAC) . Surprisingly, given the previous findings that mice lacking DUSP10 /MKP-5 were more sensitive to inflammatory tissue damage in skin and lung , DSS-treated DUSP10 −/− mice exhibited lower levels of intestinal inflammation, better intestinal crypt architecture and lower levels of pro-inflammatory cytokine/chemokine expression than wild type animals. This protection was secondary to improved intestinal epithelial cell (IEC) barrier function, which serves to separate luminal contents from the mucosal immune system, as evidenced by reduced IEC leakage of fluorescein isothiocyanate (FITC)-dextran . Mechanistically, this was due to increased ERK-mediated expression of Kruppel-like factor-5 (KLF5), a transcription factor which up-regulates cyclin B expression and promotes IEC proliferation during intestinal regeneration and wound healing. In accordance with this, increased numbers of proliferating Ki67-positive cells were observed in the intestinal crypts of DSS-treated DUSP10 −/− colon compared with wild type tissue . Again, these results suggest possible tissue-specific variation in DUSP10 /MKP-5 activity towards MAPKs, as ERK but not JNK or p38 activity was affected by DUSP10 /MKP-5 loss. 
Finally, although loss of DUSP10 /MKP-5 was protective against DSS-induced inflammation, combined treatment of DUSP10 −/− mice with the mutagen azoxymethane (AOM) and DSS resulted in an increased incidence of larger adenomatous polyps, which stained positive for Ki67 and β-catenin. Overall, this strongly suggests that the IEC and the subsequent tumours were more proliferative in the absence of DUSP10 /MKP-5, indicative of a tumour suppressor function for this MKP, an idea supported by the observation that higher levels of DUSP10 /MKP-5 expression correlated with better survival amongst patients with colorectal cancer .
DUSP16 /MKP-7 DUSP16 /MKP-7 was the last of the 10 dual-specificity MKPs to be identified and was characterised as a JNK and p38-specific MKP with a possible function as a regulator of JNK activity in macrophages . Although relatively little is known about this phosphatase, three recent studies using knockout mice have begun to shed some light on its possible physiological role(s). Firstly, using a gene trap null mutation, Niedzielska et al. reported that loss of DUSP16 /MKP-7 caused perinatal lethality . Observing that DUSP16 /MKP-7 was inducible in macrophages in response to TLR agonists, they used fetal liver cells from the null mice to reconstitute the lymphoid and myeloid lineages in lethally irradiated syngeneic CD45.1+ animals. They found that T and B cell populations were present in normal numbers, that >95% of resident macrophages were derived from the DUSP16 /MKP-7 null donor cells, and that the mice had normal numbers of granulocytes and plasmacytoid dendritic cells, indicating that DUSP16 /MKP-7 is not essential for homeostasis of the immune system under steady-state conditions. However, there was a deficit in numbers of splenic CD11c+/CD11b+ myeloid dendritic cells secondary to impaired granulocyte-macrophage colony-stimulating factor (GM-CSF)-driven proliferation of bone marrow progenitors . Subjecting reconstituted mice to LPS challenge revealed no undue sensitivity to sepsis, but did reveal JNK-dependent overproduction of IL-12β (IL-12p40) by macrophages in response to LPS . Overall, these results reveal a dual function for DUSP16 /MKP-7 in the innate immune system involving selective control of differentiation and cytokine production , but further work is required to map out the physiological consequences of this regulation. Zhang et al. also reported that loss of DUSP16 /MKP-7 was lethal and used reconstitution experiments to study the role of this MKP in adaptive immunity . In agreement with Niedzielska et al. , they found that T cell development and numbers were normal, but that CD4+ T cells lacking DUSP16 /MKP-7 were hyper-responsive to activation, produced much more IL-2 and had higher rates of proliferation when compared to wild type cells. To study T cell differentiation and function, they cultured naive DUSP16 −/− CD4+ T cells under Th1, Th2, or Th17 conditions in vitro and found that while functional Th1 and Th2 cells were produced normally, DUSP16 −/− Th17 cell populations produced less IL-17A and IL-17F, and contained nearly 50% fewer IL-17A-producing cells compared with WT cells . Interestingly, U0126, a specific MEK inhibitor, efficiently reversed the deficit in IL-17A-producing Th17 cells in vitro , indicating that regulation of ERK, and not JNK or p38, was responsible. Given the role that Th17 cells play in autoimmunity, the susceptibility of the reconstituted animals was also assessed using the MOG-induced model of EAE and, consistent with the functional Th17 deficit, these animals were less susceptible to disease, indicating an essential role for this MKP in autoimmune responses . Finally, a recent study has explored the reason for the embryonic/prenatal lethality in mice lacking DUSP16 /MKP-7 and revealed an essential role for this MKP in brain development. Embryos lacking DUSP16 /MKP-7 exhibited congenital obstructive hydrocephalus together with brain overgrowth. This was secondary to blockage of the midbrain aqueduct by the expansion of neural progenitor cells, eventually preventing the outflow of cerebrospinal fluid. 
Interestingly, only an increase in cells staining positively for phospho-p38 was observed in the affected regions of the CNS, indicating that this MAPK, rather than ERK or JNK, could be the relevant target . Taken together, these studies reveal the first essential role for a member of the MKP family of enzymes in brain development and, as the phenotype of DUSP16 −/− mice recapitulates aspects of different human neurodevelopmental disorders, suggest that either DUSP16 /MKP-7 or the pathways it regulates may be implicated. They also reveal specific defects in innate and adaptive immunity in mice lacking DUSP16 /MKP-7. In particular, the specific role of DUSP16 /MKP-7 in promoting autoimmunity makes it a potential therapeutic target in a range of human disorders such as inflammatory bowel disease, rheumatoid arthritis and lupus.
Conclusions and future perspectives The past decade has seen an acceleration in the use of GEM models to probe the physiological and pathophysiological roles of the MKPs, and these have provided a wealth of information for certain members of the family, particularly in relation to the functional regulation of the immune system, but also in metabolic disease and cancer. As is the case for other classes of protein phosphatases, it is abundantly clear that MKPs are not merely passive "erasers" of protein phosphorylation, but instead form a complex network of activities in cells and tissues that act to regulate the spatiotemporal activity of the different MAP kinase pathways and play essential roles in regulating key physiological outcomes. Several themes have emerged, one of which is the importance of compartmentalised regulation of MAPK signalling by activities in the nucleus and cytoplasm, as exemplified by the regulation of nuclear JNK activity by DUSP1 /MKP-1 in metabolic control and nuclear ERK activity by DUSP5 in cancer . It is also clear that there are both tissue- and cell type-specific differences in the MAPK isoforms targeted by particular MKPs, one example being the preference of DUSP1 /MKP-1 for inactivation of p38 MAPK in macrophages and dendritic cells, whereas JNK is the relevant substrate in T cells . Thus far, we do not have any detailed grasp of how these specificities may be altered in vivo . This must be addressed in future work, as must the validity of putative non-MAPK substrates for certain MKPs invoked in disease models. The latter include the possibility that STAT3 is directly targeted by DUSP2 in innate immunity , the potential regulation of FOXO1 and of as yet uncharacterised non-MAPK substrates by DUSP6 /MKP-3 in metabolic regulation and endothelial inflammation, respectively, and the modulation of PKC activity by DUSP7 /MKP-X in oocyte meiotic progression . Finally, there are numerous examples where knockout phenotypes indicate that certain MKPs may be therapeutic targets in human disease. These include inhibition of DUSP1 /MKP-1 in combatting obesity and depression, and DUSP2 as a potential anti-inflammatory drug target . However, caution must be exercised here, as inhibition of DUSP1 /MKP-1 could also result in more severe inflammatory responses, and many existing anti-inflammatory agents, such as glucocorticoids, actually enhance the expression and activity of this MKP as part of their mode of action. Clearly, specificity of action towards individual MKPs would also be crucial in any strategy to target these enzymes and to avoid undesirable side effects. In this regard, and due to the involvement of a redox-active cysteine residue in catalysis, the PTPase superfamily has long been regarded as "undruggable". However, recent progress in the development of allosteric PTPase inhibitors gives hope that future work will lead to the development of highly specific small molecules able to target MKP activity, which will then allow a meaningful exploration of their therapeutic potential.
Strategies to Improve Health Communication: Can Health Professionals Be Heroes? | 7bb2ef5f-78a1-4623-8a62-7654f7b4eda4 | 7353280 | Health Communication[mh] | Communicating health messages clearly and openly to the public is challenging, as much of the evidence focuses on longevity and disease prevention, rather than behaviours that generate instant results and gratification . Previous research has indicated that young adults and university students are often uninterested in the long-term benefits or consequences of their current eating behaviours . Whilst nutrition science has enabled countless discoveries and progressions in science , nutrition science is more complicated than other disciplines of science in multiple ways. Firstly, food is an essential part of every human’s life, thus many people have a vested interest in nutrition and care about their health . Diet quality is typically poor, particularly amongst young adults and university students, and the current obesogenic climate makes healthy choices more challenging than ever before . Secondly, with the widespread use of social media (SM; see for a glossary of terms), many people without formal qualifications, such as celebrities and social media influencers (herein referred to collectively as SMIs; ) are sharing science-related information that is influential and highly accessible to a wide audience. The discipline of nutrition science is riddled with questions surrounding authenticity, trustworthiness, and credibility . The oversimplification of translating study findings by some media outlets causes confusion among the public. These translation activities often ignore key differences in study design—i.e., methods and the population of interest (human vs. animal studies)—thus causing confusion and lack of trust when results are conflicting. Our work has shown that government translation of nutrition research (i.e., the Australian Guide to Healthy Eating) fails to capture the attention of young adults as it is not relevant and applicable to their lives, unlike content from SMIs . SM has enhanced the proliferation of ‘fad diets’, particularly those which restrict whole food groups (e.g., the paleo diet; grains and dairy), thus limiting the variety in our diets. Young adults are the biggest consumers of SM content, with approximately 70% of 18–24-year-olds using Instagram , and University students feeling permanently connected to SM . However, our systematic review identified that the use of SM for health interventions in young adults had limited success with highly variable engagement rates ranging from 3–69% . Health professionals often have jobs outside of SM, and are bound by professionalism principles, and thus may not be as candid or have as much time as SMIs to grow their audience and spread evidence-based information, potentially limiting their ability to engage young people. In the existing post-truth era , experts are often less highly regarded by young adults and emotional message appeals are typically the most effective methods of communication . Previous research has detailed the effectiveness of positive emotional message appeals, such as humour , as a way to increase engagement on SM . On Instagram, young adults are surrounded by the pressure of idyllic lifestyles, frequently being exposed to accounts promoting ‘#fitspiration’ and ‘#cleaneating’ encouraging self-comparison and negative self-image . 
Many of the photos on Instagram are heavily edited, creating unrealistic expectations and impacting vulnerable people, particularly women, who are trying to fit in with others online . Exposure to image-related content is associated with higher body dissatisfaction, dieting or restricting food, overeating, and choosing healthy foods in young adults; hence, the mental and emotional impact of SM differs between individuals . Young adults often feel pressure to be healthy due to the tendency to compare themselves to others; however, they frequently lack the motivation or ability to successfully make healthy behaviour changes due to many environmental and social barriers . This sub-population is an important target for health interventions to increase the capacity to adopt healthy eating behaviours, which aid in reducing the risk of chronic disease later in life . Therefore, exploring the perceived trustworthiness, authenticity, and message appeals of nutrition professionals (NPs; ) and SMIs on Instagram could be useful to inform health communication techniques targeted at young adults in the future. Some NPs could be considered SMIs, but the distinguishing factor in our study is the presence or absence of a tertiary qualification in nutrition . This paper is informed by the self-determination theory and the source credibility model . The self-determination theory encompasses the concept of authenticity, defined as "being true to the self in terms of an individual's thoughts, feelings, and behaviours reflecting their true identity" . In the marketing literature, individuals tend to perceive another person (e.g., a celebrity) as authentic when the other person's actions reflect their autonomous, self-determining, true self . Those who are perceived as authentic have a higher level of influence over others, both online and offline . Source credibility refers to a communicator's positive characteristics that affect the receiver's acceptance of a message, encompassing attractiveness, trustworthiness, and expertise . In these analyses, we focus primarily on trustworthiness. Our scoping review has highlighted the paucity of research from health and nutrition science that considers the trustworthiness or authenticity of a spokesperson's communications . The aim of this paper was to understand the differences in consumers' perceptions of NPs and SMIs on Instagram through exploring authenticity and trustworthiness. In marketing, authenticity and trustworthiness are distinct constructs; however, research has found that the perceived authenticity of celebrities and brands increases their trustworthiness (e.g., authentic brands are more likely to be trusted by consumers compared to inauthentic brands) . Based on the health and marketing literature, it is hypothesised that SMIs are similar to celebrity endorsers (who are perceived as authentic ; ) as they are both recognised as authorities in their specific fields , and, therefore, consumers will perceive the Instagram post of the SMI (versus NP) as more: (1) authentic; and (2) trustworthy. Given that positive or gain-framed messages have been found to more effectively promote the adoption of healthy eating behaviours , we hypothesise that a positive message appeal will influence the authenticity of SMIs' (versus NPs') Instagram posts.
2.1. Participants and Recruitment A cross-sectional pilot study was used to explore University students’ perceptions of a SMI and NP. Prior to recruitment, this study was approved by the Monash University Human Research Ethics Committee (approval number: 19201). Participants were a convenience sample of marketing research methods students enrolled in a Bachelor of Business at a Metropolitan University in Australia. Recruitment occurred via Sona, an online research portal in which University researchers advertise current research opportunities for students to participate in. Only second-year students enrolled in the marketing research methods subject could participate. Within this sample, there were no specific inclusion or exclusion criteria. A brief description of the study was provided on the Sona portal, with 13 possible time slots for participation available, allowing a maximum of 220 participants. Participation in the study contributed to 3% of the students’ coursework requirements. Overall, 152 participants attended the sessions. 2.2. Procedure Data collection occurred over two days on 13 May and 16 May 2019. The study was held in a Behavioural Laboratory at the University. Participants attended one 30-min session of their choice, occurring between 9:00 a.m. and 5:00 p.m. on either of the days of data collection. On arrival, the participants entered the laboratory and were given a unique participant identification number to access the questionnaire. Participants’ names and identification numbers were kept confidential at all times. Participants sat down at a desk of their choice where a laptop was set up to begin the questionnaire. Informed consent was required before commencing, or alternatively, if the participant did not consent, they could leave without participating. There were barriers around individual desks to ensure the privacy of participants, and headphones were supplied to reduce noise disturbance. 2.3. Pilot Questionnaire 2.3.1. Development The pilot questionnaire was online, self-administered, and generated on Qualtrics ® (Provo, UT, USA) software. The pilot questionnaire was developed through collaboration with researchers from the nutrition science (ELJ, AM, TMC) and marketing disciplines (JI, SC) and informed by our prior research . A template was developed from a previous questionnaire conducted as part of the Communicating Health study , then further refined to be specific to this study. The questionnaire was piloted to assess the format, layout, and length, with the feedback used to further refine the questionnaire before completing the final draft. The final questionnaire consisted of 75 questions covering 14 topics . Originally, the questionnaire had a broad scope and included questions relating to communication objectives, SM behaviour, and physical activity. However, based on the findings of our scoping review these analyses focus only on the results relating to trustworthiness, authenticity, and message appeals . 2.3.2. Stimulus The participants were shown screenshots of real-life Instagram posts from a SMI and a NP. These posts were sourced from Socialbakers © (Prague, Czech Republic), an online SM analytical company, as part of a previous student project . Previously, the top 10 most popular Facebook profiles among Australian users for both lifestyle SMIs and NPs were identified (methods detailed elsewhere) . 
The highest performing accounts on Instagram (based on the number of followers) from both categories were chosen to be included in this questionnaire. The accounts identified were both female, with a differing number of followers at the time of data collection (NP = 91.2 thousand; SMI = 11.3 million). Over the period of 25 June 2018 to 29 July 2018, Socialbakers © was used to determine the highest performing post (based on engagement, i.e., the sum of likes and comments) on each of the Instagram accounts. These posts ( n = 2) were then included in the pilot questionnaire. Photos were edited to ensure consistency across the profiles (i.e., removing comments whilst leaving the number of likes visible). There was a difference in the number of likes between sources (NP 2686 likes, SMI 282,711 likes). For this pilot questionnaire, a screenshot of the Instagram profile (small profile picture and biography) for both the NP and SMI, and their corresponding post (highest performing) was also included. The NP's profile biography stated that she was a dietitian, body positive, and an author of a nutrition-related book, and her post included in the questionnaire discussed body image and body positivity . The SMI's profile biography included her relationship status, her pregnancy progress, and advertising for her fitness program, and her post discussed her long-term relationship. The order in which participants saw the NP or SMI was randomised (by Qualtrics ® online software) to minimise confounding order-effects. 2.3.3. Measures Familiarity and likeability of the source were first measured based on the Instagram profile (each using a validated five-item semantic differential scale) . The main outcomes were post trustworthiness and post authenticity, assessed using validated scales, as well as the perceived message appeal used in the Instagram post (measured by eight individual items on a five-point Likert scale ranging from 'strongly disagree' to 'strongly agree'; affiliation, hope, humour, heroic, convenience, guilt, sorrow, fear) . Participants answered questions related to their own nutrition knowledge, healthy eating behaviours, and quality of life, also assessed via statements on a five-point Likert scale . All scales had been previously used in the literature. Participants' demographic data such as gender, age, country of birth, employment status, and self-reported weight and height were also collected. 2.4. Statistical Analysis 2.4.1. Software Questionnaire data was collected in Qualtrics ® online software and exported into IBM SPSS ® (Statistical Package for Social Sciences; Version 26, Armonk, NY, USA) for Windows. The PROCESS macro was used for regression analysis . 2.4.2. Statistical Tests Participants who did not respond to demographic questions were excluded from analysis ( n = 3). General data cleaning took place prior to analysing data. Variables that were negatively worded within a scale were reverse coded. To assess the reliability of the scales used and ensure the items in each scale were measuring the same concept, Cronbach's alpha (α) was calculated. If the α value was above 0.7, the scale was considered a reliable measure. Once reliability was confirmed, an average score for each subscale was calculated by dividing the total scale score by the number of items within the scale (e.g., trustworthiness, five items). 
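As a concrete illustration of the scale-scoring step just described, the sketch below shows how Cronbach's α and a subscale mean score could be computed. The study performed these steps in SPSS, so this Python fragment is illustrative only; the data frame and the item column names (trust_1–trust_5) are hypothetical stand-ins rather than the study's actual variable names.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for one scale: one column per Likert item, one row per respondent."""
    k = items.shape[1]
    item_variance_sum = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variance_sum / total_variance)

# Synthetic stand-in for the questionnaire export (hypothetical column names).
rng = np.random.default_rng(1)
survey = pd.DataFrame(rng.integers(1, 6, size=(149, 5)),
                      columns=[f"trust_{i}" for i in range(1, 6)])

trust_items = survey[[f"trust_{i}" for i in range(1, 6)]]
alpha = cronbach_alpha(trust_items)

# As in the study, a scale with alpha above 0.7 is summarised as the mean of its items
# (equivalent to the total scale score divided by the number of items).
if alpha > 0.7:
    survey["post_trustworthiness"] = trust_items.mean(axis=1)
```

Reverse coding of negatively worded items, as described above, would be applied before this step (for a five-point item, recoding a response x as 6 − x).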
For any scales where α was below 0.7 (i.e., nutrition knowledge), factor analysis was conducted, and the scale was split into the appropriate number of sub-scales with an α value above 0.7. The body mass index (BMI) (kg/m 2 ) of participants was calculated using self-reported weight and height data from the questionnaire. BMI (kg/m 2 ) was classified into three groups: underweight (<18.49 kg/m 2 ), healthy weight (18.5–24.99 kg/m 2 ), and overweight or obese (>25 kg/m 2 ), based on the World Health Organisation (WHO) cut-offs . The PROCESS macro bootstrapping procedure ( n = 10,000) was used to test for moderated mediation (PROCESS Model 7; See ) . The PROCESS macro tests the effect of the interaction (i.e., source (SMI/NP) × message appeal) on the mediator (i.e., post authenticity), as well as the effect of the mediator (i.e., post authenticity) on the outcome variable (i.e., post trustworthiness). The independent variable was dummy coded to reflect the source of the profile (i.e., SMI = 0, NP = 1). Covariates included in the regression models were participants’ gender, self-reported BMI, participants’ subjective nutrition knowledge, subjective nutrition expertise, participants’ healthy eating behaviour, self-reported quality of life, and the perceived familiarity and likeability of the source.
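To make the moderated mediation analysis explicit, the following is a minimal sketch of the logic behind PROCESS Model 7: a mediator model containing the source × moderator interaction, an outcome model containing the mediator, and a percentile bootstrap for the conditional indirect effect. The study itself ran this in the SPSS PROCESS macro with 10,000 resamples and the covariates listed above; this Python version omits the covariates for brevity, uses fewer resamples, and all variable names (source, heroic, authenticity, trust) and the synthetic data are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def conditional_indirect_effect(df: pd.DataFrame, w: float) -> float:
    """(a1 + a3*w) * b: effect of source on trust through authenticity,
    evaluated at moderator value w (heroic appeal), following Model 7 logic."""
    mediator = smf.ols("authenticity ~ source * heroic", data=df).fit()
    outcome = smf.ols("trust ~ authenticity + source", data=df).fit()
    a1 = mediator.params["source"]
    a3 = mediator.params["source:heroic"]
    b = outcome.params["authenticity"]
    return (a1 + a3 * w) * b

def bootstrap_ci(df: pd.DataFrame, w: float, n_boot: int = 1000, seed: int = 0) -> np.ndarray:
    """Percentile bootstrap CI for the conditional indirect effect
    (the PROCESS macro used 10,000 resamples)."""
    rng = np.random.default_rng(seed)
    draws = [
        conditional_indirect_effect(
            df.sample(len(df), replace=True, random_state=int(rng.integers(1_000_000))), w
        )
        for _ in range(n_boot)
    ]
    return np.percentile(draws, [2.5, 97.5])

# Synthetic illustration only: source is dummy coded (0 = SMI, 1 = NP),
# heroic is the mean-centred perceived heroic appeal.
rng = np.random.default_rng(7)
n = 149
source = rng.integers(0, 2, n)
heroic = rng.normal(0, 1, n)
authenticity = 3.8 + 0.2 * source + 0.3 * source * heroic + rng.normal(0, 0.5, n)
trust = 1.0 + 0.7 * authenticity + rng.normal(0, 0.5, n)
data = pd.DataFrame({"source": source, "heroic": heroic,
                     "authenticity": authenticity, "trust": trust})

low, high = bootstrap_ci(data, w=heroic.std())  # indirect effect at +1 SD of heroic appeal
print(low, high)
```

The conditional indirect effect is typically probed at the mean of the moderator and at ±1 SD, which mirrors the simple-slope values reported in the Results below.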
3.1. Demographics A total of 149 (97.4%) participants with a median age of 20 years (25th and 75th percentiles: 19, 21) completed the survey. Approximately half of participants identified as female (54.4%) and were born in Australia (51.7%; ). Males were taller and heavier, and consequently had a greater BMI (kg/m 2 ) than females ( p = 0.002). However, there was no difference in the proportion of participants between each BMI category . The majority of participants studied full time (98%) and engaged in part-time or casual work (62.1%; ). Self-reported quality of life was generally high . Participants believed they had high levels of nutrition knowledge, with females rating significantly higher than males ( p = 0.026; ). 3.2. Reliability of Scales Cronbach's α was above 0.7 for all scales including post trustworthiness (5 items), post authenticity (4 items), source familiarity (4 items), source likeability (3 items), quality of life (4 items), healthy eating behaviours (3 items), and nutrition knowledge (split into general nutrition knowledge: 4 items, and nutrition expertise: 2 items), allowing the mean scores to be used for further analysis . 3.3. Descriptive Statistics The NP was perceived as significantly more authentic and trustworthy than the SMI . In terms of perceived message appeal, the NP was more likely to use a heroic message appeal, with the SMI using affiliation (feelings of love, belonging, and togetherness) and guilt/shame message appeals ( p < 0.001; ). 3.4. Regression Results Separate models estimated the effect of the interaction between the source and each of the eight message appeals on the dependent variable: post authenticity . Each of the separate models included participants' gender, self-reported BMI, source familiarity, source likeability, self-reported quality of life, participants' healthy eating behaviour, participants' subjective nutrition knowledge, and subjective nutrition expertise. The results showed that the perceived heroic appeal (conveying bravery, nobility, and admiration; ) communicated by the source was the only predictor of post authenticity in this case. As such, we further explore this relationship in greater detail below, while also including post trustworthiness in our model. Model 1 estimated the effect of the interaction between the source and heroic message appeal on the dependent variable: post authenticity. The model included participants' gender, self-reported BMI, source familiarity, source likeability, self-reported quality of life, participants' healthy eating behaviour, subjective nutrition knowledge, and subjective nutrition expertise as covariates. The results showed that the source was a significant predictor of post authenticity. Interestingly, the NP's post was perceived as significantly more authentic than the post by the SMI ( t = −2.06, p = 0.04; M SMI = 3.79, M NP = 3.98). Heroic message appeal was not a significant predictor of post authenticity ( t = −1.01, p = 0.31). However, the interaction (Source × Heroic Message Appeal) was found to have a significant effect on perceived post authenticity ( t = 2.54, p = 0.01; , Model 1). Specifically, a strong heroic message appeal (+1 SD above the mean) significantly increased the perceived authenticity of the NP's post only (M SMI = 3.86, M NP = 4.26; t = 2.76, p = 0.01; ). Both the SMI's and NP's posts were perceived as less authentic when their message appeal was weak in heroism (−1 SD below the mean; M SMI = 3.75, M NP = 3.70; t = −0.39, p = 0.70; ). 
The effect of participant’s gender, source likability, and nutrition expertise on post authenticity were also statistically significant. We then tested whether the effect of the interaction (source × heroic message appeal) on post trustworthiness was mediated by post authenticity ( , Model 2). When testing for moderated mediation, the key indicator is the indirect effect of the interaction term on the dependent variable through the mediator . The interaction (source × heroic message appeal) was a significant predictor of post authenticity, and post authenticity was a positive and significant predictor of post trustworthiness. As the 95% bootstrapped confidence interval for the indirect effect of the interaction on post trustworthiness through post authenticity did not include zero ( β = 0.15, 95% CI = 0.02 to 0.29), a significant moderated mediation effect was demonstrated. Specifically, the results provide evidence that post authenticity enhances post trustworthiness only when participants perceive a strong heroic message appeal being used by a NP . The effect of participants’ gender and participants’ nutrition expertise on post trustworthiness was also statistically significant.
It was hypothesised that the SMI would be more trustworthy and authentic than the NP based on the extensive celebrity endorsement literature, whereby an Instagram influencer is akin to a true celebrity . However, the results of this study provide initial evidence that the NP's post was perceived by young adults to be more authentic, and subsequently, more trustworthy, than an SMI's post. We provide evidence of this relationship irrespective of the young adults' gender, BMI, familiarity with the source, likeability of the source, self-reported quality of life, healthy eating behaviour, subjective nutrition knowledge, or subjective nutrition expertise. Therefore, hypothesis 1 and hypothesis 2 were not supported. Communicating health through SM is challenging, and research focused on the methodology for improving SM engagement for NPs is currently lacking. In this study, a novel concept in this field of research was examined: the perceived authenticity and trustworthiness of NPs' posts compared to SMIs' posts. SMIs often promote damaging fad diets and share misinformation without consequence, whilst NPs promote evidence-based, sustainable diet changes for disease prevention. Currently, Instagram and Facebook are unregulated in regard to health misinformation, with the exception of COVID-19-related information. In 2019, a policy was introduced that prevented diet supplements from being advertised to under-18-year-olds, which is a step in the right direction. We await the application of these techniques to other areas in regard to stemming the proliferation of misinformation. Our previous research has identified many factors that may impact the trustworthiness and authenticity of such posts, such as the number of followers, message appeal, and authority cues . However, there is a paucity of research on the influence of either NPs or SMIs on the perceived trustworthiness and authenticity of SM posts . Based on our exploratory results, we further showed that the authenticity of the NP's posts was dependent on the perceived strength of the heroic message appeal communicated. An NP's post was perceived as more authentic, and subsequently more trustworthy, when the message appeal used in the post was perceived to be strongly heroic. On the other hand, the NP's post was perceived as significantly less authentic, and subsequently less trustworthy, when the message appeal was perceived to be weak in heroism. In other words, it is suggested that, when appropriate, NPs attempt to convey positive emotions relating to heroism, such as bravery, nobility, and success, in their messages in order to enhance the genuineness and realness of their posts. Whilst the authenticity of NPs' messages, or content, has not been directly measured to date (to the authors' knowledge), the medical literature highlights the importance of authenticity driving the motives of health professionals, such as doctors and nurses . Conceptualisations of authenticity in health surround the themes of genuineness, consistency, and caring . Trust is also an important consideration in healthcare settings, with trusted professionals being more likely to lead their patients to better health outcomes and, consequently, quality of life . The current literature (from clinical settings) suggests that health professionals can develop trusting relationships through being non-judgemental and encouraging two-way interaction between themselves and patients . 
However, young adults have previously identified lack of trust and communication difficulties as important factors contributing to negative healthcare experiences . Trusting relationships are essential in achieving behavioural change over SM, as those on SM platforms who have higher trust from their audience have a higher level of influence over others' behaviours . In a review looking at the efficacy of using SM for achieving nutrition outcomes in young adults, it was found that while young adults considered SM an acceptable source for health information, they preferred a one-way conversation regarding health with professionals through SM and did not wish to discuss their weight . In addition, health and fitness information shared through University-affiliated SM pages has been found to be acceptable to University students . Young adults constitute the audience of many SMIs, who share their personal life online and attract a loyal following wanting to form a personal connection. This is known as a 'parasocial relationship' , creating an illusion of intimacy and friendship despite the majority of followers remaining unknown to the SMI . In contrast, NPs must maintain a sense of professionalism online and, therefore, typically cannot create the same type of content without risking the loss of their professional image . However, in this study, the caption in the NP's post is a vulnerable description of when she was struggling with weight loss. This use of vulnerability may perhaps not be the 'norm' for NPs on SM, but it does provide strategies for how professionalism and vulnerability may be combined, as suggested by the celebrity literature from marketing and psychology . One study looking at the authenticity of bloggers found that bloggers who shared their innermost thoughts and many facets of their personal life on their blog were seen as more relatable and authentic than those who were less open about their personal life . In both commercial and social marketing, message appeal is often manipulated to influence consumers' emotions and generate a greater persuasive capacity . There is limited research published in health and medicine that considers the effectiveness of varying message appeals over SM . However, those published emphasise the effectiveness of positive emotional appeals to increase engagement with public health messages . Specifically, a 'heroic' message appeal, associated with bravery and nobility, is not commonly researched in the literature, with many studies focusing on negative emotional appeals such as 'guilt', 'fear', and 'shame', or single positive appeals such as 'humour' . Negative emotional appeals can often result in feelings detrimental to an individual's wellbeing , and have been found to lead to the 'flight, fight, or freeze' response, eliciting a longer-lasting impact on the nervous system when compared to positive appeals . As detailed in Self-Determination Theory, an individual should be autonomous in their decision making to attain long-lasting behaviour change . Individuals exposed to negative appeals without control over their exposure (e.g., seeing an advertisement on TV) can develop an 'avoidance motivation', whereby an individual makes an effort to avoid anything they anticipate will cause sadness and anxiety ; hence, scare tactics are not always effective in influencing behaviours. 
Our online conversations provide further evidence of the need to move away from negative rhetoric, as guilt appeals around healthy eating did not work on young adults who did not hold the same beliefs about health (e.g., 'sinners'). Traditionally, health promotion organisations have maintained a serious message and utilised rational message appeals (e.g., facts, statistics) over emotional appeals, generating lower engagement rates from their audience. Young adults have indicated that healthy eating messages would be more persuasive if they incorporated empathy, while an authoritative message was rated poorly for perceived ability to encourage healthy eating. The findings from our study suggest that by focusing on positive emotional appeals such as heroism, the audience can empathetically connect with NPs, which could lead to greater engagement rates. Specifically, results from the regression analysis indicated that a perceived 'strong' heroic message appeal resulted in a more authentic perception of the NP's post, and when the heroic message appeal was perceived as 'weak', the authenticity of both the SMI's and the NP's posts was reduced. This research has highlighted the importance placed on message appeal in assessing the authenticity of SM posts, especially those of NPs. NPs could benefit from communicating their success and bravery to increase the authenticity of their posts. Although this study found that young adults were less likely to perceive the posts of SMIs as authentic and trustworthy, consumers are often inspired by celebrities and influencers, and follow them to learn from their experiences with different diets and exercise regimes. Many SMIs advertise one-on-one consultation sessions and sell personalised meal plans, exercise eBooks, and nutritional supplements based on anecdotal evidence and pseudoscience. As the number of followers increases for SMIs, there are more consumers trying the behaviours promoted, reflecting 'herd behaviour', the phenomenon of individuals deciding to follow others and imitating group behaviours rather than deciding independently on the basis of their own private information. Herd behaviour can trigger a larger trend, which could be harmful to many people, particularly if it is based on unqualified advice. To our knowledge, there are no consistent ways for the public to identify credentialed health professionals and scientists, particularly on SM. NPs need to be cognisant of the different communication strategies used in the online environment and focus on educating the public through sharing relatable evidence-based health advice. The profiles and posts included in the survey were sourced based on objective engagement metrics rather than being subjectively chosen by the researchers. Real-life profiles and posts were used rather than fictional characters, making the study more realistic and providing a true sense of consumers' perceptions of the SMI and NP. Furthermore, the use of validated scales to assess consumer perceptions produced excellent Cronbach's α scores for all measures and ensured the survey measured what it was intended to. The experimental nature of this study (rather than a real-life setting), as well as the convenience sample of university students participating as part of their coursework requirements, limits the generalisability of these results to the wider population of young adults.
Furthermore, the use of student participants limits the variability of results, as student samples (often characterised as Western, educated, industrialised, rich, and democratic (WEIRD)) are seen as more homogeneous in terms of education level and socioeconomic status than the general public. Additionally, the two Instagram profiles sourced by Socialbakers© were both young and conventionally attractive females, which could have affected the variability of results. In an effort to keep the posts as realistic as possible, the number of likes was visible; however, there was a large difference between sources (SMI: 282,711; NP: 2686), which could have impacted participants' perceptions. The SMI and NP posts also discussed different topics (NP: body image; SMI: relationships), which could have affected results in unknown ways. In this pilot study, we focussed on measuring the trustworthiness of the Instagram posts. Future iterations should consider adapting the questions to measure perceived expertise and attractiveness (if the person is shown in the photo) of the SMI and NP based on the Instagram posts. Future research could use an experimental design to manipulate the message in the caption and/or the number of likes on the Instagram posts to examine the effect of message topic and bandwagon cues on trustworthiness and authenticity. Further research is required to enhance the understanding of trustworthiness and authenticity on SM, as well as to validate these findings using male SMIs and NPs, and non-student young adult populations.
This study was, to the authors' knowledge, the first to compare perceptions of SMI and NP posts on Instagram, with a focus on trustworthiness, authenticity, and message appeal. We have developed recommendations for NPs using SM that add to the existing recommendations emanating from our Communicating Health work, particularly in relation to body image, nuanced messaging for groups with different attitudes and beliefs, and the need to adapt posts to the different SM platforms along with utilising strategies associated with higher engagement. Nutrition science and health communication are fraught with difficulty in distilling complex messages into information that can be simply understood by the lay population. We found that by using a heroic message appeal, the NP post was seen as more authentic and, subsequently, more trustworthy. This research has highlighted the impact that message appeals can have on an audience; therefore, posting positive, brave, and successful content could be an effective way to improve the persuasiveness of health communication by NPs.
Metabolic Atlas of Human Eyelid Infiltrative Basal Cell Carcinoma
Ethics Approval
The human clinical study received approval from the Ethics Committee and the Institutional Review Board of Zhongshan Ophthalmic Center, Sun Yat-Sen University (Ethics number: 2024KYPJ012). We adhered to all institutional regulations regarding the ethical use of human volunteer information and samples, and informed consent was obtained from each participant. The human tissue experiments complied with the guidelines of the ARVO Best Practices for Using Human Eye Tissue in Research (Nov 2021).
Human Eyelid Basal Cell Carcinoma Samples Collection
Four patients with eyelid iBCC who underwent controlled lesion excision at the Oculoplastic Department of Zhongshan Ophthalmic Center were selected. During surgery, one tumor sample and one normal skin sample were collected from each patient for metabolic analysis. Inclusion criteria were as follows:
1. Age between 40 and 75 years
2. Postoperative pathology confirming basal cell carcinoma of the infiltrative subtype
3. Absence of other underlying metabolic diseases (e.g., diabetes, hyperthyroidism) or cancers before surgery
4. No prior radiotherapy or chemotherapy
Data Sets
The single-cell basal cell carcinoma data were sourced from the GEO database (GSE141526). Raw gene expression profiles of four basal cell carcinoma samples and two normal samples were used.
Copy Variation Analysis for Individual Samples
Copy number variation analysis and the identification of malignant cells and subclone areas were conducted with the SCEVAN R package.
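As an illustration of this step, the following is a minimal sketch of per-sample, CNV-based tumor-cell calling with SCEVAN; the object name, the data accessor, and the optional arguments (e.g., SUBCLONES) are assumptions based on the package documentation rather than the authors' exact code.

library(SCEVAN)
library(Seurat)

# Raw gene-by-cell counts for one sample (the accessor depends on the Seurat version)
counts <- GetAssayData(bcc1, assay = "RNA", layer = "counts")

# pipelineCNA() infers copy-number profiles, classifies cells as tumor or normal,
# and optionally partitions the tumor compartment into subclones
cnv_res <- pipelineCNA(counts, sample = "BCC1", SUBCLONES = TRUE)
head(cnv_res)  # per-cell tumor/normal call and subclone assignment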
Quality Control of Raw Matrix
Cells were retained for analysis if they met the following criteria:
1. 350 < genes/cell < 5000
2. Less than 10% mitochondrial gene expression
3. Not identified as outliers (P = 1 × 10^−3)
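A minimal Seurat-style sketch of these filters is given below; the object and mitochondrial gene pattern are illustrative, and the outlier criterion (P = 1 × 10^−3) would be applied separately, since the text does not specify the outlier test used.

library(Seurat)

seu <- CreateSeuratObject(counts = counts)
seu[["percent.mt"]] <- PercentageFeatureSet(seu, pattern = "^MT-")

# Keep cells with 350-5000 detected genes and <10% mitochondrial reads
seu <- subset(seu,
              subset = nFeature_RNA > 350 & nFeature_RNA < 5000 & percent.mt < 10)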
Data Integration
Following quality control, UMI count matrices were log-normalized and scaled using Seurat. Because samples were processed and sequenced in batches, batch effect correction was performed with the Harmony algorithm (version 1.2.0). Dimensionality reduction, clustering, and differential expression analysis followed Seurat's standard tutorial, employing principal component analysis (PCA) and Uniform Manifold Approximation and Projection (UMAP) with 20 principal components.
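The corresponding Seurat/Harmony calls would look roughly like the sketch below; the batch column name ("sample") and the use of variable features are assumptions, while the 20 principal components mirror the text.

library(Seurat)
library(harmony)

seu <- NormalizeData(seu)          # log-normalization
seu <- FindVariableFeatures(seu)
seu <- ScaleData(seu)
seu <- RunPCA(seu, npcs = 20)

# Batch correction across samples, then UMAP on the corrected embedding
seu <- RunHarmony(seu, group.by.vars = "sample")
seu <- RunUMAP(seu, reduction = "harmony", dims = 1:20)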
Marker Gene Selection and Clustering
Graph-based clustering was performed using Seurat's “FindClusters” function (clustering resolution = 0.3, k-nearest neighbors = 20). Marker genes for each cluster were identified using the “FindAllMarkers” function, with criteria min.pct > 0.25 and logfc.threshold > 0.25. The top 100 ranked marker genes were selected for each cluster.
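A sketch of this step with the stated parameters follows; ranking the top 100 markers by average log2 fold change is an assumption, as the text does not name the ranking metric.

library(Seurat)
library(dplyr)

seu <- FindNeighbors(seu, reduction = "harmony", dims = 1:20, k.param = 20)
seu <- FindClusters(seu, resolution = 0.3)

markers <- FindAllMarkers(seu, min.pct = 0.25, logfc.threshold = 0.25)
top100 <- markers %>%
  group_by(cluster) %>%
  slice_max(order_by = avg_log2FC, n = 100, with_ties = FALSE)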
Differential Gene Expression Analysis
Differential gene expression between tumor and normal cells was analyzed using the “FindMarkers” function with the Wilcoxon test. Genes with |log fold change| > 0.25 and adjusted P < 0.05 were considered differentially expressed.
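In code, the comparison could be expressed as below; grouping tumor versus normal cells by the SCEVAN classification stored in a metadata column named "class" is an assumption.

Idents(seu) <- "class"   # tumor/normal labels from the CNV analysis
deg <- FindMarkers(seu, ident.1 = "tumor", ident.2 = "normal", test.use = "wilcox")
deg_sig <- subset(deg, abs(avg_log2FC) > 0.25 & p_val_adj < 0.05)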
Gene Ontology (GO) Enrichment, Kyoto Encyclopedia of Genes and Genomes, and Gene Set Enrichment Analysis
Functional enrichment analysis of marker and differentially expressed genes was performed using Metascape ( https://metascape.org/gp/index.html ), visualized with the ggplot2 R package (version 3.3.5). Kyoto Encyclopedia of Genes and Genomes (KEGG) analysis used the clusterProfiler R package (version 4.8.3).
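Metascape enrichment is run through its web interface, so only the KEGG step is sketched here; the symbol-to-Entrez conversion via org.Hs.eg.db is an assumption.

library(clusterProfiler)
library(org.Hs.eg.db)

# Map gene symbols of the significant genes to Entrez IDs, then run KEGG enrichment
ids  <- bitr(rownames(deg_sig), fromType = "SYMBOL",
             toType = "ENTREZID", OrgDb = org.Hs.eg.db)
kegg <- enrichKEGG(gene = ids$ENTREZID, organism = "hsa", pvalueCutoff = 0.05)
head(as.data.frame(kegg))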
Gene Set Variation Analysis
Gene set variation analysis (GSVA) is an unsupervised analytical method commonly used to assess enrichment scores of samples within predefined gene sets. GSVA compares the enrichment score differences between samples to reveal changes in gene set activity under different conditions or between groups. GSVA was performed with the GSVA R package, and upregulated/downregulated pathways with P < 0.05 were considered significant and visualized ( A). Gene sets for metabolism-related pathways were sourced from the Cancer Cell Metabolism Gene Database ( http://bioinfo.uth.edu/ ), a comprehensive annotation resource focused on cell metabolism genes in cancer. This database provides valuable resources on the functional annotations of cell metabolism genes across various cancer types, supporting research on cancer cell metabolism and broader studies.
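A minimal sketch of the scoring step is shown below; the exact call depends on the GSVA version (newer releases wrap the arguments in gsvaParam()), and "metab_sets" stands for a named list of gene sets assembled from the database above.

library(GSVA)

expr <- as.matrix(GetAssayData(seu, layer = "data"))     # log-normalized expression
gsva_scores <- gsva(expr, metab_sets, method = "gsva")   # gene sets x cells/samples

# Group-wise differences in pathway activity (e.g., BCC vs PTS) can then be tested,
# for example with limma, retaining pathways at P < 0.05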
Single-Cell Flux Estimation Analysis
Single-cell metabolic flux analysis was conducted using scFEA (v1.1; Python 3.9). Metabolic module data were downloaded from the official website ( https://github.com/changwn/scFEA ).
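Since scFEA is run as a separate Python tool, the R-side preparation typically amounts to exporting a gene-by-cell expression matrix; the file name, the "sample" metadata column used for the iBCC subset, and the expectation of a CSV input are assumptions based on the scFEA documentation.

# Export the iBCC subset (BCC3/BCC4) for scFEA
ibcc <- subset(seu, subset = sample %in% c("BCC3", "BCC4"))
expr <- as.matrix(GetAssayData(ibcc, layer = "counts"))   # genes x cells
write.csv(expr, file = "iBCC_expr_for_scFEA.csv", quote = FALSE)
# scFEA is then invoked from the command line following its README,
# pointing its expression-file argument at this CSV together with the
# downloaded metabolic module and stoichiometry files.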
Sample Preparation
Postsurgical basal cell carcinoma samples were embedded in optimal cutting temperature (OCT) compound and stored at −80°C before analysis. Tissues were placed at −20°C 1 hour before use, and tissue sections were prepared using a Leica CM1950 cryostat (Leica Microsystems GmbH, Wetzlar, Germany). Tissue sections of 20 µm thickness were thaw mounted onto ITO-coated microscopic slides (Bruker Daltonics, Billerica, MA, USA) for matrix-assisted laser desorption ionization (MALDI) MSI and onto glass slides for hematoxylin and eosin (H&E) staining. The mounted tissue sections were dried for 30 minutes in a desiccator prior to matrix application and were then vacuum packed and transferred to a −80°C freezer for sealed storage.
MALDI Matrix Preparation and Application
In total, 5 mg/mL alpha-cyano-4-hydroxycinnamic acid (CHCA) was dissolved in 70% HPLC-grade acetonitrile (ACN) with 0.1% trifluoroacetic acid (TFA). The 20-µm-thick tissue sections were sprayed using a TM-Sprayer (HTX Technologies, Chapel Hill, NC, USA). The matrix application parameters set in the TM-Sprayer were as follows: spray nozzle velocity, 1200 mm/min; track spacing, 3 mm; flow rate, 0.08 mL/min; spray nozzle temperature, 60°C; and nitrogen gas pressure, 10 psi.
Mass Spectrometry Imaging
The conductive slides with the applied matrix were placed onto the target plate of a timsTOF fleX MALDI-2 instrument (Bruker Daltonics). Using the Bruker Data Imaging software, the detection area of each tissue section was selected and the imaging resolution was set. Each tissue section was divided into a two-dimensional grid according to its size for imaging. At an appropriate laser energy, the tissue sections were scanned, and the molecules ionized and released from the target sites were detected by mass spectrometry, generating raw data files. The specific acquisition parameters were as follows:
Data Analysis
MSI data were analyzed using SCiLS Lab software (version 2021c premium; Bruker Daltonics) with root mean square normalization. Metabolites were identified by matching the accuracy of the m/z value (<10 ppm) against an in-house database and the Bruker Library MS-Metabobase 3.0 database. Receiver operating characteristic (ROC) analysis and Student's t-test were applied to determine the significance of differences between the samples/segmentations. An area under the curve >0.7 or <0.3 and a corrected P < 0.01 were used to screen significantly changed metabolites.
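The screening criteria can also be reproduced outside SCiLS Lab on exported per-region intensities, as in the sketch below; the input layout (observations by metabolites), the two-level grouping factor, and the Benjamini-Hochberg correction are assumptions, since the correction method is not stated.

library(pROC)

screen_metabolites <- function(intensity, group) {
  # intensity: matrix of spectra/regions (rows) x metabolites (columns)
  # group: two-level factor, e.g., tumor vs normal
  auc_vals <- apply(intensity, 2, function(x) as.numeric(auc(roc(group, x, quiet = TRUE))))
  p_vals   <- apply(intensity, 2, function(x) t.test(x ~ group)$p.value)
  p_adj    <- p.adjust(p_vals, method = "BH")
  keep     <- (auc_vals > 0.7 | auc_vals < 0.3) & p_adj < 0.01
  data.frame(metabolite = colnames(intensity), AUC = auc_vals, p.adj = p_adj)[keep, ]
}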
Construction of Single-Cell Transcriptome Atlas for Human Basal Cell Carcinoma
We obtained single-cell RNA sequencing (scRNA-seq) data from the GEO database (GSE141526), including primary human BCC samples ( n = 4) and peri-tumoral skin (PTS) tissues ( n = 2). In total, we processed 38,435 raw single cells. Following the removal of doublets and quality control filtering, 38,373 cells were retained for downstream analysis. To elucidate the cellular diversity and heterogeneity within each tumor and to map the distribution of malignant cells in each sample, we analyzed individual samples using the SCEVAN R package. The results revealed distinct boundaries between tumor and normal cells in BCC1 and BCC2, while BCC3 and BCC4 showed more diffuse boundaries, indicating greater tumor cell invasion and infiltration ( A). These findings are consistent with the clinical subtypes of the respective samples: BCC1 and BCC2 as nodular BCC and BCC3 and BCC4 as infiltrative BCC. The copy number variation (CNV) analysis showed varying subclone numbers in the BCC samples ( B). BCC1 had five subclone regions with an increased copy number on chromosome 6. BCC2 had nine subclone regions with a reduced copy number on chromosomes 11 and 12. BCC3 had four subclone regions with no significant genomic abnormalities. BCC4 had four subclone regions with increased copy numbers on chromosomes 5 and 7. After integrating the BCC samples (BCC1–BCC4) and PTS samples (PTS1–PTS2) ( C), we clustered the integrated data into a total of 12 subgroups. Based on common tumor biomarkers ( B), we identified eight cell types and visualized them using UMAP plots, including BCC and PTS ( B). The epithelial cell clusters included basal epithelial/tumor cells (KRT14), proliferative epithelial cells (MKI67), and terminally differentiated keratinocytes (IVL). Two types of fibroblasts were identified based on commonly expressed fibroblast genes (COL1A1, COL1A2). Among these, cluster 3 and cluster 11 were PDGFRA+ cells, which we defined as inflammatory cancer-associated fibroblasts (iCAFs) based on previous studies. Cluster 5 specifically expressed RGS5, and we termed these cells myofibroblastic cancer-associated fibroblasts (myoCAFs). Other cell types were identified using common marker genes, including endothelial cells (ECs) (PECAM1), melanocytes (MLANA), and immune cells (PTPRC).
Analysis of Fibroblast Subpopulations in Human Basal Cell Carcinoma
Fibroblasts exhibited notable differences between PTS and BCC samples ( B). To further investigate these differences, we performed a subpopulation analysis of fibroblasts in the integrated samples. This analysis revealed two distinct fibroblast subtypes: iCAFs, characterized by PDGFRA expression, and myoCAFs, characterized by RGS5 expression. Interestingly, a small number of iCAFs and myoCAFs were also identified in PTS, suggesting potential tumor cell infiltration and invasion. To elucidate the functions of these two fibroblast subtypes, we analyzed their differentially expressed genes ( A). Notably, PLA2G2A and CFD were upregulated in iCAFs ( B). This finding is consistent with previous breast cancer studies that highlighted the unique expression of phospholipase encoded by PLA2G2A and complement pathway-related genes (such as CFD and C3) in iCAFs. GO enrichment analysis of iCAFs ( C) indicated their involvement in extracellular matrix organization, collagen metabolism regulation, and inflammatory chemotaxis. In myoCAFs, we observed specific upregulation of RERGL and FHL5 ( B), both of which are related to angiogenesis. GO analysis ( C) also showed significant enrichment of myoCAFs in muscle system processes and angiogenesis-related pathways. These findings suggest that iCAFs and myoCAFs exhibit distinct heterogeneity and may play different roles in the development and progression of BCC. iCAFs appear to be involved in extracellular matrix remodeling, collagen metabolism, and inflammatory processes, while myoCAFs are likely associated with angiogenesis and muscle system processes.
Metabolic Reprogramming in Human Infiltrative Basal Cell Carcinoma
To illustrate the comprehensive metabolic changes in BCC, we first analyzed the GSVA scores of KEGG pathways between BCC and PTS samples ( A). Overall, multiple metabolic pathways were upregulated in BCC compared to PTS, with the fructose and mannose metabolism pathway showing the most significant difference. Previous studies have shown that fructose enhances the Warburg effect by downregulating mitochondrial respiration and increasing aerobic glycolysis, which may support metastatic cancer growth and drive metabolic reprogramming in cancer cells. In melanoma cells, fructose activates cytoprotection by inducing heme oxygenase 1 expression, helping cells resist immune-mediated killing during immune checkpoint blockade therapy. Other pathways related to lipid and amino acid metabolism also showed notable upregulation. Given the significant metabolic activation observed in BCC samples, we conducted a detailed analysis of gene set variation in metabolic pathways across all BCC clusters ( B). This analysis revealed that, in contrast to tumor cells, stromal clusters in BCC (including iCAFs, myoCAFs, ECs, and immune cells) display more pronounced metabolic activity. Based on these findings, we chose to treat tumor cells as a single group in subsequent metabolic pathway analyses to facilitate a comparison of metabolic differences between tumor and stromal tissue and to further explore the specific metabolic module changes and their metabolic fluxes in different clusters in BCC. We used scFEA to predict metabolic modules in the iBCC subsets (BCC3, BCC4). First, we predicted the tricarboxylic acid cycle (TCA) modules ( A, B) and found an upregulation of citrate across all clusters. The concentration of citrate in the tumor microenvironment has been reported to be involved in maintaining tumor growth. In vitro experiments have also shown that physiological concentrations of citrate can sustain the proliferation of various cancer cells, including prostate, pancreatic, and gastric cancer cells. We observed a significant upregulation of pyruvate in iCAFs ( A), which is consistent with findings from other studies. In primary lymphomas, CAFs were found to secrete significant amounts of pyruvate, and the presence of pyruvate promoted the survival of lymphoma cells. However, we did not observe a significantly high expression of pyruvate in myoCAFs, suggesting that the maintenance of tumor cell survival mainly relies on iCAFs. Furthermore, we observed a significant upregulation in the first five modules of the TCA, corresponding to the glycolysis process ( B). This indicates an increased rate of energy production in tumor cells, which are in a high metabolic state. Notably, this upregulation was also observed in tumor microenvironment–related cell types, including CAFs, ECs, and immune cells. These findings further demonstrate the “nutrient” role of the tumor microenvironment in supporting tumor growth and survival. We next analyzed the changes in metabolites in the glucose–glutamine metabolic pathway ( C). Interestingly, we observed that the expression of glutamine is higher in proliferative epithelial cells compared to tumor cells. Recent studies have highlighted the importance of glutamine metabolism in maintaining the proliferation and apoptosis of tumor epithelial cells. In tumor cells that use glutamine as a source of energy and building materials, a portion of the TCA cycle shows an opposite direction for reactions from α-ketoglutarate to isocitrate.
In this manner, α-ketoglutarate, which originates from glutaminolysis, supplies the TCA cycle in the opposite direction, potentially contributing to the accumulation of citrate ( C). Furthermore, an increase in phosphoglycerate dehydrogenase (PHGDH) was observed in both myoCAFs and immune cells, suggesting the importance of these cell types in supporting tumor metabolism. Given that BCC is highly associated with DNA damage caused by photodamage, we investigated the metabolic changes in the methionine–glutathione–folate network ( D) to assess DNA methylation and related metabolic alterations. Ultraviolet radiation induces carcinogenic photoproducts such as cyclobutane pyrimidine dimers and 6-4 pyrimidone photoproducts, leading to DNA mutations and local immunosuppression. Compared to other cells, we found an increase in S-adenosylmethionine and 5,10-methylenetetrahydrofolate in immune cells. Recent literature also reports that DNA methylation is related to metabolic remodeling in immune cells within the tumor microenvironment, highlighting the complex interplay between epigenetic regulation and metabolism in the immune compartment of iBCC. In the iron-related metabolism ( E), we observed high expression of heme and heme-associated metabolites in both tumor cells and proliferative epithelial cells, along with a mild upregulation of these metabolites in the cell components of the tumor microenvironment (ECs, CAFs, and immune cells). Heme was known to be an important factor in the tumor microenvironment and was involved in macrophage polarization, angiogenic potential of tumor endothelial cells, matrix remodeling by cancer-associated fibroblasts, and interactions between neural and cancer cells. The significance of these substances in the tumor microenvironment is noteworthy and will be further elucidated in the upcoming spatial metabolomics analysis. Our analysis of the metabolic changes in the tumor microenvironment of iBCC revealed significant alterations in lipid metabolism ( E). Notably, we observed a substantial increase in oxaloacetate levels across all cell subgroups. Oxaloacetate has been implicated in nucleotide biosynthesis in rapidly proliferating cancer cells and plays a pivotal role in metabolic reprogramming in various cancer models. In addition to oxaloacetate, we found an upregulation of 3-phosphoglycerate (3PG) in immune cells and CAF cell types. Concurrently, a significant increase in the metabolic flux of the 3PG → serine pathway was observed in cancer cells and proliferative epithelial cancer cells ( E). Previous studies suggested that the glycolysis intermediate 3PG can be diverted to produce serine, and during solid tumor progression, the stiffness of the extracellular matrix in the tumor microenvironment may enhance glycolysis-derived serine biosynthesis. As CAFs are closely associated with tumor extracellular matrix reprogramming, they may further influence tumor serine biosynthesis by upregulating 3PG, thereby regulating tumor growth and invasion. We also observed an increase in glutathione (GSH) metabolic flux in ECs, iCAFs, melanocytes, and proliferative ECs ( E). In human squamous cell carcinoma, CAFs have been shown to utilize glutamate to form glutathione, balancing cellular redox status and promoting stromal extracellular matrix remodeling. 
Furthermore, a significant upregulation of glutathione was also found in melanocytes, which may be due to UVA-mediated melanogenesis caused by excessive production of oxidants and deterioration of the antioxidant defense network. Our results demonstrate that the iBCC microenvironment exhibits enhanced metabolic flux in glycolysis, glutamine, heme, and glutathione pathways. Moreover, we observed significant differences in the metabolic profiles of the two types of CAFs. In iCAFs, pyruvate content was relatively high, while in myoCAFs, PHGDH and 3PG were highly expressed, indicating their function in extracellular matrix reprogramming.
MALDI MSI Reveals Metabolic Complexity in Human Infiltrative Basal Cell Carcinoma of the Eyelid
To corroborate the metabolic alterations in iBCC identified through single-cell analysis and to elucidate the complex metabolic landscape of the tumor microenvironment, we collected four samples for MALDI MSI ( ; Methods; A). Due to the specificity of tumor resection surgery, some sections were able to display changes in the tumor invasive zone ( A). Both tumor and stroma regions were identified in the samples ( B). Compared with H&E staining, red pixels exhibited spatial characteristics similar to stroma regions, while the spatial distribution of yellow pixels was consistent with tumor regions ( C). We identified metabolic similarities in the MS images of each BCC sample using bipartitional k-means clustering, which grouped image pixels with similar metabolic fingerprints together. Both C1 and C2 were divided into six clusters, suggesting a high degree of metabolic complexity within the tumor ( D). Further analysis revealed that all detected features could be separated into six major components ( A). We detected numerous active components in human eyelid iBCC samples ( E, F). The top 10 categories of substances included carboxylic acids and their derivatives, aromatic hydrocarbons, fatty acyls, steroids, and glycerophospholipids. The significant presence of metabolites related to lipid metabolism suggested a high degree of lipid metabolic heterogeneity in iBCC. We analyzed the metabolites that were upregulated in all iBCC samples compared to normal skin ( A). Taurine showed a significant increase in all four tumor samples ( B). Previous studies have shown elevated levels of taurine in primitive embryonic tumors and brain tumors such as retinoblastoma, medulloblastoma, and neuroblastoma. Deoxy-GMP, associated with DNA damage caused by light exposure, a significant pathogenic factor in BCC, was also upregulated. This suggests its role as an activated carcinogenic metabolite that binds to DNA and forms adducts, inducing gene mutations and accelerating tumor progression ( C). O-Phosphoethanolamine, significantly elevated in tumors ( D), is linked to tumor growth, endurance in harsh microenvironments, invasion, and infiltration. Pyrithione, a heat shock response inducer causing DNA damage and impaired genomic integrity, was also upregulated. Notably, metabolites associated with carbohydrate metabolism (such as ATP, UTP, and sodium lactate) were upregulated in samples C3 and C4 compared to C1 and C2 ( A). This may be due to the presence of central ulceration and recurrent bleeding in the C3 and C4 samples, making them more aggressive compared to the C1 and C2 samples and thereby requiring more rapid energy support. KEGG enrichment analysis revealed that the sphingolipid signaling pathway, with O-phosphoethanolamine as a key metabolite, was upregulated in all four tumor samples ( B). O-phosphoethanolamine, derived from sphingosine-1-phosphate (S1P), is linked to cancer cell transformation, migration, growth, and resistance. In C1, C3, and C4, the lysosomal metabolism pathway was upregulated, likely due to the high carbohydrate affinity required by rapidly proliferating cancer cells. We then investigated the downregulated metabolites in the four iBCC samples ( F). Conjugated linoleic acid ( G), known for its antitumor effects in skin keratinocytes, was significantly reduced in all four samples. Compared to normal skin, 2,3-bisphosphoglycerate ( H) was significantly downregulated, indicating enhanced glycolysis in tumor tissues, consistent with previous literature and our findings.
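For readers unfamiliar with this segmentation step, the sketch below illustrates the idea of bipartitional (bisecting) k-means on a pixel-by-m/z intensity matrix; SCiLS Lab performs this segmentation internally, so this function is a conceptual stand-in rather than the software's actual routine, and the minimum-split size and depth are arbitrary choices.

bisect_kmeans <- function(mat, depth = 3) {
  # mat: pixels (rows) x m/z features (columns); returns an integer segment label per pixel
  labels <- rep(1L, nrow(mat))
  for (d in seq_len(depth)) {
    new_labels <- labels
    for (cl in unique(labels)) {
      idx <- which(labels == cl)
      if (length(idx) < 10) next                        # too few pixels to split further
      km <- kmeans(mat[idx, , drop = FALSE], centers = 2, nstart = 5)
      new_labels[idx] <- max(new_labels) + km$cluster   # two new child segments
    }
    labels <- new_labels
  }
  as.integer(factor(labels))                            # relabel segments 1..K
}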
Metabolite Similarity and Differences Between Tumor and Invasive Zone in iBCC
Since iBCC strongly interacts with its nearby environment, we subdivided the slice into microregions to analyze the tumor–stroma interactions at the invasive zone. The locations of tumor, invasive zone, and normal area in iBCC were identified ( A). KEGG enrichment analysis showed that lipid metabolism–related pathways were upregulated in both the tumor and invasive area compared to normal tissue ( B). In the fatty acid biosynthesis pathway (hsa00061), oleic acid was found to be highly expressed in the invasive area ( C, D). CAFs in xenograft tumors have been reported to contain higher levels of oleic acid, which can activate lipid metabolism in cancer cells under glucose deprivation conditions, enhancing their stem cell activity and contributing to malignant progression. Additionally, in the biosynthesis pathway of unsaturated fatty acids (hsa01040), arachidonic acid was highly expressed in both the tumor and invasive area ( C, D). As an important mediator of inflammation, arachidonic acid is closely related to the maintenance of chronic inflammation in the tumor microenvironment and the immune infiltration of tumor-associated immune cells. These findings underscore the significant impact of lipid metabolism in the tumor microenvironment on BCC proliferation and immune suppression. We also investigated the metabolic differences between the tumor and invasive zone ( E). GSH levels were higher in the invasive area, consistent with our previous single-cell findings ( E). In human skin squamous cell carcinoma, CAFs use glutamate to form glutathione, balancing cellular redox states and promoting extracellular matrix remodeling. Furthermore, glutathione levels increase by 30% in prostate cancer cells grown in CAF-conditioned media, suggesting that elevated GSH in tumor cells may result from CAF regulation. Heme levels were significantly higher in the invasive zone compared to the tumor area. This finding aligns with recent literature indicating that heme can modulate the tumor microenvironment by acting on tumor-associated endothelial cells and tumor-associated macrophages, promoting angiogenesis and tumor immune suppression, in addition to supporting cancer cell growth, survival, and metastasis. Elevated levels of carnosine were also found in the invasive zone.
UXS1 as a Potential Target to Counteract Drug Resistance in Basal Cell Carcinoma
Among the upregulated metabolites in the tumor area, UDP-glucuronic acid (UDPGA) exhibited the highest expression ( E, F). UDPGA plays a crucial role in glycosylation within the Golgi apparatus and serves as a key substrate for UDP-glucuronosyltransferases (UGTs). These enzymes utilize UDPGA to conjugate glucuronic acid to xenobiotics, including chemotherapy drugs, facilitating their deactivation and secretion through glucuronidation. This process may contribute to tumor drug resistance. Recent research has highlighted the potential of UXS1 as a regulatory point, as it can convert UDPGA to UDP-xylose, thereby reducing UDPGA accumulation. However, this effect is only observed in tumors with high UDP-glucose 6-dehydrogenase (UGDH) expression. In non–small cell lung cancer with high UGDH expression, the induction of UXS1 loss significantly slows tumor growth and extends median survival under chemotherapy in A549 and H460 models. Motivated by these findings, we explored whether BCC exhibits high UGDH expression. We examined the distribution of UGDH ( G) and found high expression levels in tumor cells and tumor microenvironment cells, including myoCAFs, iCAFs, and ECs. The elevated expression of UGDH in both tumor cells and stromal cells suggests active remodeling of the ECM within the BCC microenvironment, potentially contributing to tumor progression and invasion.
Infiltrative basal cell carcinoma of the eyelid is a highly aggressive subtype, presenting substantial challenges due to its invasive growth pattern, high risk of recurrence, and considerable treatment burden. Some infiltrative cases are considered "difficult to treat" and may require alternative therapies such as hedgehog pathway inhibitors (e.g., vismodegib or sonidegib) and immune checkpoint inhibitors (e.g., PD-1 inhibitors), whose costs are approximately 10 times higher than the cost of surgery. Our study provides novel insights into the metabolic landscape of this tumor using spatial metabolomics imaging and scRNA-seq. By integrating these advanced techniques, we have mapped the metabolic alterations in the tumor microenvironment and identified potential diagnostic markers and therapeutic targets.

Cellular Metabolic Flux in iBCC
GSVA analysis between BCC and PTS has demonstrated the activation of various metabolic pathways in BCC. The fructose and mannose metabolism pathway showed the most significant difference between BCC and PTS. Previous studies have demonstrated that fructose enhances the Warburg effect by downregulating mitochondrial respiration and increasing aerobic glycolysis, potentially supporting metastatic cancer growth and driving metabolic reprogramming in cancer cells. Several pathways related to lipid metabolism and amino acid metabolism were also altered, which led us to further explore the metabolic changes and their metabolic flux in different clusters in BCC. For the subsequent metabolic pathway analysis, we opted to consider tumor cells as a whole, since gene set variation analysis of metabolic pathways across all BCC clusters revealed that stromal clusters (such as iCAFs, myoCAFs, ECs, and immune cells) undergo more significant metabolic changes than the tumor cells themselves. This suggests that the stroma plays a more dynamic and active role in the metabolic reprogramming of BCC, offering crucial insights into the broader tumor microenvironment and potentially guiding therapeutic strategies that target stromal elements in addition to tumor cells. We investigated the cellular metabolic flux in iBCC, revealing a significant upregulation of glycolytic flux across all clusters within the iBCC microenvironment. This finding is consistent with the high metabolic state of iBCC and highlights the critical role of glycolysis in tumor growth and progression. Notably, elevated glutamine levels were observed in proliferative epithelial cells, suggesting that glutamine plays a crucial role in sustaining the proliferation and apoptosis of tumor epithelial cells.

Metabolic Alterations in the Tumor Microenvironment
Spatial metabolomics imaging allowed us to analyze the metabolic differences between tumor cells and normal cells, as well as the metabolic changes in the tumor invasive area. We identified several metabolites, including taurine, deoxy-GMP, O-phosphoethanolamine, and pyrithione, that were highly expressed across tumor samples, indicating their potential as diagnostic markers for eyelid iBCC. Previous studies on BCC indicated that tumors respond to sudden bursts of fibroblast-specific inflammatory signaling pathways by producing heat shock proteins. The upregulation of pyrithione suggests that the heat shock response in BCC may be induced by pyrithione. The upregulation of the sphingolipid signaling pathway, particularly O-phosphoethanolamine derived from S1P cleavage, highlights its significance in promoting cancer cell transformation, migration, growth, and drug resistance.

Interestingly, we observed that tumors located in the lower eyelids tend to exhibit central ulceration and recurrent bleeding. Additionally, metabolites associated with carbohydrate metabolism, such as ATP, UTP, and sodium lactate, were upregulated in the C3 and C4 samples compared to the C1 and C2 samples. This observation may be attributed to the more aggressive nature of the C3 and C4 samples, likely due to the presence of central ulceration and recurrent bleeding, which demand higher metabolic activity and rapid energy support to sustain the tumor's growth and survival. These findings align with the notion that ulceration and ongoing tissue damage in tumors trigger increased glycolytic and energy demands to support invasive and proliferative cellular activities.

Lipid Metabolism and Lysosomal Pathway Upregulation
Tumor and invasive areas showed upregulation of lipid metabolism pathways, with high expression of oleic and arachidonic acids. This suggests that CAF-derived oleic acid may be transferred to cancer cells for energy production, similar to mechanisms observed in lung adenocarcinoma. The activation of the arachidonic acid pathway in iCAFs and myoCAFs supports their role in maintaining inflammation in the invasive area. The lysosome pathway was also upregulated in both tumor and invasive areas, suggesting that lysosome-induced tumor immunity might also be present in iBCC.

Potential Therapeutic Target
In the tumor area, we observed a specific upregulation of UDPGA. To investigate the potential activation of its upstream gene, we examined UGDH expression in single cells. We found that UGDH was highly expressed in tumor cells, proliferative epithelial cells, iCAFs, myoCAFs, and ECs. Loss of UXS1 has been reported to be effective only in tumor cells with high UGDH expression. Given the high expression of UGDH in iBCC, we speculate that UXS1 could also serve as a potential therapeutic target to control tumor proliferation and drug resistance in iBCC.

Limitations of the Study
Despite these findings, our study has several limitations. First, the small sample size, due to difficulties in presurgical tumor type identification, potentially limits the comprehensive understanding of metabolic heterogeneity and spatial diversity. Also, due to the rarity of locally advanced basal cell carcinoma and metastatic basal cell carcinoma, we were unable to collect sufficient samples to present the metabolic changes at different stages of iBCC. Second, the analysis of epithelial cell subgroups is insufficient. Additionally, the analysis of interactions between tumor cells and other microenvironment components is limited. Lastly, current MSI methods prevent the capture of cell-specific characteristics and correlations in tissues, thereby limiting metabolomic insights at the single-cell level.
In conclusion, our study provides novel insights into the metabolic landscape of iBCC of the eyelid, revealing the pivotal role of metabolic reprogramming—particularly within stromal cells—in tumor progression. We have identified potential diagnostic markers and therapeutic targets, especially UXS1, and highlighted the complex interplay between metabolic alterations and immune evasion in this aggressive eyelid malignancy. Future research is necessary to further validate these findings and explore clinical applications, such as combination therapies targeting both the tumor and stromal components or investigating the molecular mechanisms driving metabolic shifts in BCC. These efforts may help refine current treatment strategies and ultimately enhance patient outcomes.
Breathe Easy EDA: A MATLAB toolbox for psychophysiology data management, cleaning, and analysis | b9ff392d-7296-448d-85bf-d787f5166347 | 6317497 | Physiology[mh] | Electrodermal activity (EDA) methods evaluate fluctuations in skin electrical conductance caused by changes in sweat gland production. The sympathetic nervous system innervates palmar and plantar eccrine sweat glands, and changes in skin conductance are thought to measure sympathetic nervous system arousal ( ). Importantly, EDA recordings are a valuable and popular psychophysiological measurement in studies of affect and cognition ( ). It is well known that respiration and EDA influence each other ( ). In laboratory settings, researchers often leverage this relationship to check the integrity of a psychophysiology set-up. Asking participants to take a deep breath should produce concurrent deflections in both waveforms, and properly configured recording equipment should detect that response. EDA is typically recorded using electrodes placed on the palmar or plantar surfaces where eccrine sweat glands are densely located. Respiration, typically recorded using a belt secured around the diaphragm, is an oscillatory event that approximates a sine wave with regular breathing. However, irregular respiration, or abnormalities in the respiration waveform (frequency or amplitude), are associated with non-specific changes in the EDA waveform. These physiological respiration-related artifacts can lead researchers to overestimate the presence or magnitude of skin conductance responses (SCRs) in experiments ( ). Despite the strong relationship observed between EDA and respiration traces, prior work has shown that EDA and respiratory signals are not strictly coupled ( ), which may relate to differences in their physiological origin. Physiologically, the emotion-reactive palmar and plantar eccrine sweat glands are maximally innervated by cholinergic ( ) sudomotor fibers leaving the ventral root of the spinal cord ( , p. 20). While eccrine sweat glands are modulated by the sympathetic nervous system, the transmission related to EDA is mainly cholinergic, not noradrenalergic ( ; ). However, deep breathing has been associated with sudden increases in free-circulating adrenaline, producing sweat responses ( , p. 32), which mimic SCRs on EDA recordings. As mentioned above, this relationship is useful for checking psychophysiological signal integrity, but can also bias SCR analyses. While movement-induced EDA artifacts are fairly straightforward to identify (e.g., presence of an unusually steep rise in the waveform), physiologically derived artifacts appear similar to arousal-related waveforms ( ). Developing methods for identifying respiration-related artifacts has been a challenge for the field of psychophysiological research due to high intersubject and intrasubject variability in respiration activity, yielding a wide range of waveform characteristics ( ). A lack of analytical solutions has motivated software development within this field since the early 1990’s, with the goal of improving how researchers inspect and manipulate respiration data ( ). Researchers are strongly encouraged to account for such respiration-induced EDA artifacts, and subsequently outline those artifact elimination procedures in their manuscripts ( ). This can be challenging, since common artifact-control practices involve researchers visually inspecting their respiration data, which is unfortunately both time-consuming and subjective. 
has provided a useful decision tree for discarding artifact EDA responses based on a set of criteria. However, an easy-to-use and freely available software that expedites visual inspection of respiration data, and allows researchers to quantify their artifact-control procedures is not available. This toolbox might be particularly helpful for researchers identifying respiration artifacts in experiments with longer trial durations, such as viewing video clips or recalling autobiographical memories. In these experiments, the standard stimulus-response latency window for identifying event-related SCRs (e.g., 1–4 seconds) may no longer be suitable, and longer trials almost certainly have a higher probability of respiration-related SCR artifact contamination. Currently, there is a need for easy-to-use, flexible, and interoperable software that facilitates EDA artifact elimination via the widely employed and accepted method of visual inspection. We have developed a novel MATLAB toolbox for efficiently eliminating EDA respiration artifacts and analyzing EDA data, which we freely distribute as Breathe Easy EDA or ‘BEEDA’. BEEDA’s streamlined artifact removal interface allows users to quickly identify and clean EDA data, expediting EDA analysis without compromising analysis integrity. Additionally, BEEDA’s integrated EDA analysis functionality allows users to seamlessly analyze cleaned EDA data within the toolbox. Furthermore, the toolbox includes inter-rater reliability (IRR) analyses so that researchers may evaluate the reliability of their artifact-control procedures. The BEEDA toolbox is controllable through a graphical user interface (GUI), and requires no programming skill to use. This toolbox may be used either for simple artifact detection, EDA analyses, or for both artifact elimination and subsequent EDA analyses—as illustrated in . This flexibility allows users to take advantage of BEEDA’s functionality without restricting the use of complementary software such as Mindware (MindWare Technologies Ltd., Gahanna, OH), Ledalab ( ), ANSLAB ( ), or AcqKnowledge ( ). For instance, one could use BEEDA only for marking artifacts in a dataset, and then use the artifact information file BEEDA produces with an alternative EDA analysis program. Furthermore, BEEDA is suitable for any experiment where both EDA and respiration data were collected, and parameters specific to individual experiments can easily be modified through the GUI (e.g., trial structure and analysis options). This permits a great deal of functional flexibility, without encumbering the toolbox’s usability. Here we describe the toolbox design, workflow, and functionality.
Overview
BEEDA's workflow was designed to offer users situationally-specific functionality within the simplest framework possible. This allows researchers to use the toolbox for their specific goals, without the toolbox adding unnecessary work in the process. As illustrated in , the BEEDA workflow begins with loading a dataset and setting a few critical parameters. After that initialization, the GUI main menu ( ) lets researchers tailor their own workflow to their specific needs. This workflow is flexible enough to include any combination of data visualization, artifact inspection/cleaning, calculating EDA statistics, or performing interrater reliability (IRR) analyses. The degree of overhead imposed by the workflow (e.g., in specifying parameters or manipulating the data) at this stage should only match the requirements of the user. The following sections describe these abilities and their implementation in detail.

A. Loading an experiment into BEEDA
Initializing the BEEDA toolbox (executing BreatheEasyEDA.m) immediately launches the data loading GUI. This interface allows users to either load data files for a new session, or load data from a previously saved session. If a new session is started, BEEDA copies and reformats raw data files into a MATLAB structure variable (BEEDAdata). The BEEDAdata variable is the toolbox's primary data structure; all user-defined parameters (e.g. analysis settings) and analysis actions (e.g. artifact removal) are written to this BEEDAdata structure. Resuming a previous session reads information from a saved BEEDAdata structure and launches into the main menu. For new sessions, basic analysis parameters are also specified in the data loading GUI. These basic settings are: downsampling and Skin Conductance Response (SCR) parameters. Importantly, once downsampling and SCR options are chosen, these settings are permanently fixed for the current BEEDA session (even if the session is saved and resumed). If a downsampling factor is specified, both the EDA and respiration data are immediately downsampled within BEEDAdata. This downsampling functionality is provided because the sampling rate capabilities of modern EDA systems (e.g. >1000 Hz) far exceed the resolution necessary for EDA analyses. Downsampling datasets to lower temporal resolutions can dramatically reduce a dataset's size, consequently improving BEEDA's memory and hard disk requirements, computation time, and GUI responsiveness.

B. Main menu
The main menu provides a visual summary of your experiment, trial information, analysis settings, and display settings ( ). The main menu also allows users to save the current BEEDA session, start the artifact removal interface, run IRR analyses, and export final analysis results. Before displaying the experiment summary panel, the EDA data is first smoothed via convolution with a Gaussian kernel (as in ). Smoothing removes minor signal noise, which may originate from a variety of sources (e.g. recording equipment or downsampling). Next, valid SCRs are identified based on previously specified threshold and rejection-rate parameters. The experiment summary panel plots the entire experiment's EDA timecourse, marking onset times for trials, and valid SCRs ( ). This window provides users with an overview of the experiment's EDA data, allowing users to easily confirm the intended dataset has loaded correctly.

All unique trial-types are displayed in the trial-type information window, and the current BEEDA session's settings are displayed in the setting information window ( ). From the main menu, users can easily set a number of session settings: SCR latency tolerances, valid trials for analysis, and display settings (see Interface display options). SCR latency tolerances establish the stimulus time-locked window when SCRs may be appropriately attributed to the preceding stimulus (see Main EDA analysis parameters), typically a 3-second window between 1–4 seconds post-stimulus onset ( ), but shorter windows have been proposed (e.g., 2 seconds or less; ; ). Additionally, if end-of-trial events were omitted during an experiment's data collection, specifying a maximum SCR latency parameter effectively creates these events. Specifying the valid trials for analysis determines which trial-types are available for artifact cleaning and EDA analysis. All unique events recorded during data collection may be declared as valid trial-types; this allows users to disregard inter-trial events, baseline events, or events not corresponding to trials of interest.

C. Interface display options
The "Display settings" main menu button ( ) allows users to customize the Artifact Removal Interface. The Expanded trial window parameter controls the additional timecourse data displayed before and after each trial in the artifact removal interface. For instance, setting expanded trial window to 5 (seconds) will display the 5 seconds before every trial and the 5 seconds after every trial. This option may help users evaluate how respiration immediately preceding or following a trial relates to respiration during a trial. More specifically, we found that being presented with the activity surrounding the trial provided a useful context for identifying potential respiration artifacts. The Number of trial windows to display parameter controls the number of trials simultaneously displayed in the artifact removal interface. This option may be particularly useful when running the BEEDA toolbox on computers with lower resolution monitors, as users can adjust the number of trials in each ARI page to best fit their display configuration.

D. Artifact removal interface
Selecting "Remove artifacts" from the main menu will launch the Artifact Removal Interface (ARI). The ARI allows users to efficiently clean EDA data via streamlined data presentation and easy-to-use controls. Users can easily scroll through 'pages' of trials, examining each trial for irregular respiration waves, as shown in . If problematic respiration waves are identified, users can clean the data with either 'SCR delete mode' or 'drag-delete mode'. Drag-delete mode removes entire time segments of EDA data, whereas SCR delete mode only removes SCRs from analysis consideration. Consequently, drag-delete mode is recommended for Skin Conductance Level (SCL) analyses and thorough artifact elimination, whereas SCR delete mode is only recommended for SCR analyses (see EDA analysis functionality). In the ARI, user-defined trials of interest are individually displayed by plotting SCR onset timepoints directly onto the trial's respiration data ( ). This presentation simplifies the manual identification of problematic breathing (e.g. ), and the recommended procedures for EDA respiration artifact scrubbing can be found in . All user actions (e.g., data cleaning) are immediately applied to BEEDAdata and can be saved through the main menu.

E. Exporting results and artifact information
Selecting "Export final results" in the main menu will analyze the user-defined trials-of-interest and export the analysis results to a Comma Separated Values formatted spreadsheet (.CSV file). This spreadsheet will show trial-wise EDA statistics, in addition to whether or not the trial was flagged for artifacts. A trial will show "flagged for artifacts" if any SCR or data segment was deleted from the trial. In this way, one may simply use BEEDA's GUI to mark artifacts within an EDA dataset, then use the artifact information output with another EDA analysis software package. Similarly, the artifact information output provides an easy means for assessing overall data quality. Experimenters may also directly analyze this output with BEEDA, in order to evaluate how reliably artifacts were identified within a dataset.
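For illustration, a minimal Python sketch of how such an exported results spreadsheet might be screened before hypothesis testing is shown below; the file name and column labels here are assumptions for the example, not BEEDA's actual output fields.

```python
# Minimal sketch (not part of BEEDA): screening a BEEDA-style results
# spreadsheet before hypothesis testing. Column names and the file name
# below are illustrative assumptions.
import pandas as pd

results = pd.read_csv("beeda_results.csv")          # hypothetical export file

# Drop trials that the rater flagged for respiration artifacts.
clean = results[results["artifact_flag"] == 0]

# Summarize phasic and tonic measures per trial type.
summary = clean.groupby("trial_type").agg(
    mean_scr_count=("n_scrs", "mean"),
    mean_scr_magnitude=("avg_scr_magnitude", "mean"),
    mean_scl=("scl_mean", "mean"),
)
print(summary)
```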
To facilitate the reporting and validity of respiration artifact rejection methods, BEEDA includes inter-rater reliability (IRR) analysis functionality for respiration artifact rejection. Selecting "Inter-rater reliability" in the main menu will perform an artifact IRR analysis directly on exported BEEDA result files. This requires that researchers have cleaned a dataset multiple times, under the same relevant parameters (verified by built-in sanity checks). After specifying these files, users can set the IRR analysis' scope to match their analysis goals. Specifically, users can limit their analysis to only trials containing SCRs (as defined by SCR threshold parameters) or analyze all trials of interest. This is a critical distinction, as the IRR for SCR-negative trials may give unrepresentative reliability statistics for SCR-oriented analyses (i.e. trials without SCRs may not have been inspected). On the other hand, these trials would certainly be considered for SCL analyses. This choice determines $T$ in the subsequent equations. After setting the IRR scope, the pair-wise Cohen's $\kappa$ between all raters is calculated and exported to a CSV spreadsheet as a labeled matrix. We used Cohen's $\kappa$ implementation ( ) in this context:
\[ \kappa = \frac{p_o - p_e}{1 - p_e} \]
For the set of all trials $T$ we defined two trial classes $C$ as: the absence of any artifact marking, or the presence of any SCR/data-segment deletion. The expected chance agreement, $p_e$, between each pair of raters $i$ and $j$ was:
\[ p_e = \frac{1}{|T|^2} \sum_{k=1}^{2} |T_i^{C_k}|\,|T_j^{C_k}| \]
where $|T_i^{C_k}|$ is the number of trials in class $C_k$ for rater $i$. The observed rater agreement, $p_o$, was:
\[ p_o = \frac{1}{|T|} \sum_{k=1}^{2} |T_i^{C_k} \cap T_j^{C_k}| \]
The user-guide documentation describes how this analysis and its output (i.e. the labeled Cohen's $\kappa$ matrix) are configured in greater detail.
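The pairwise computation can be made concrete with a short Python sketch (not taken from the BEEDA source), assuming each rater's decisions are stored as per-trial boolean artifact flags:

```python
# Sketch of the pairwise Cohen's kappa described above, assuming each rater's
# decisions are boolean per-trial artifact flags
# (True = any SCR/data-segment deletion, False = no artifact marking).
from itertools import combinations

def cohens_kappa(flags_a, flags_b):
    n = len(flags_a)
    # Observed agreement: proportion of trials placed in the same class.
    p_o = sum(a == b for a, b in zip(flags_a, flags_b)) / n
    # Expected chance agreement from each rater's marginal class frequencies.
    p_e = sum(
        (sum(f == cls for f in flags_a) / n) * (sum(f == cls for f in flags_b) / n)
        for cls in (True, False)
    )
    return 1.0 if p_e == 1 else (p_o - p_e) / (1 - p_e)

def pairwise_kappa(raters):
    """raters: dict mapping rater name -> list of per-trial artifact flags."""
    return {
        (r1, r2): cohens_kappa(raters[r1], raters[r2])
        for r1, r2 in combinations(raters, 2)
    }

example = {
    "rater1": [True, False, False, True, False],
    "rater2": [True, False, True, True, False],
}
print(pairwise_kappa(example))
```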
The BEEDA toolbox features integrated EDA analysis functionality, which may be used with or without prior artifact removal. Selecting the Export final results main menu button will initialize EDA analyses and export the subsequent results as a spreadsheet. These analyses measure tonic and phasic EDA using standard methodology ( ). Tonic EDA is defined as the slow change in SCLs over a timecourse of interest. BEEDA determines the mean and standard deviation of each trial's EDA levels, and these statistics are included in the results output. Data segments marked as artifacts using the Drag delete mode are not included in SCL analyses. Phasic EDA measurements are determined via the trough-to-peak detection of SCRs ( ). SCRs are quickly changing EDA levels that exceed an amplitude threshold and occur within a response window time-locked to a stimulus. The SCR amplitude is defined as the SCR's peak EDA level minus the SCR's initial trough EDA level. Users can explicitly specify an SCR amplitude threshold, and this practice is typical for trough-to-peak SCR detection. Alternatively, the amplitude threshold can be flexible and data-driven via setting an SCR rejection rate ( ). In BEEDA, specifying an explicit SCR threshold of 0μS and a rejection rate of 10% emulates the algorithmic SCR thresholding procedure described in . While this thresholding procedure is not typically employed, BEEDA includes this functionality to mirror proprietary EDA analysis software packages which offer similar analysis options ( ). For phasic EDA analyses, BEEDA detects valid SCRs and exports the following statistics for each trial: number of SCRs, average SCR magnitude, cumulative SCR magnitude, and maximum SCR magnitude. SCRs in data segments removed with Drag delete mode, in addition to SCRs marked as artifacts with SCR delete mode, are not included in SCR analyses. BEEDA does not include functionality for hypothesis testing with EDA statistics. Instead, the analysis results are written to a long-format .CSV spreadsheet with comprehensive labeling. The common data and file formatting allow users to easily run their hypothesis testing with any commonly used software package (e.g. SPSS, R, etc.), without arduous file-conversions or reformatting.
The imported raw data is first smoothed according to the following procedure (as also implemented in ). The EDA signal is iteratively smoothed with a Gaussian kernel, increasing the standard deviation on each iteration until there is negligible reduction in the signal's root mean square of successive differences (RMSSD), or until a maximum standard deviation of 125 ms. More explicitly, for an EDA recording $X$ with $N$ timepoints (indexed by $t$) sampled at $f$ Hz, and a Gaussian kernel $G$ specified with $\mu = 0$ and $\sigma = 0.125h$, the algorithm follows this pseudocode:
1. Initialize $h = 0$.
2. Initialize the RMSSD as $\varepsilon_{old} = \sqrt{\frac{1}{N}\sum_{t=2}^{N}(X_t - X_{t-1})^2}$.
3. Initialize $\varepsilon_{th} = 10^{-5}$, $\varepsilon_{new} = 0$, and $\Delta\varepsilon = \varepsilon_{old}$.
4. While $\Delta\varepsilon > \varepsilon_{th}$ and $h < f$:
   a. $h = h + 4$
   b. $\sigma = 0.125h$
   c. The Gaussian kernel is specified with this new $\sigma$: $G(x) = \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{(x-\mu)^2}{2\sigma^2}}$
   d. $X = X * G$ (convolution)
   e. $\varepsilon_{new} = \sqrt{\frac{1}{N}\sum_{t=2}^{N}(X_t - X_{t-1})^2}$
   f. $\Delta\varepsilon = \varepsilon_{old} - \varepsilon_{new}$
   g. $\varepsilon_{old} = \varepsilon_{new}$
Following this initial smoothing, any requested downsampling is performed via decimation. The data is then resmoothed with the previously described algorithm, and this concludes the smoothing procedures.
BEEDA implements trough-to-peak SCR detection for the EDA recording $X$ using the first derivative $dX/dt$ at each timepoint ($\Delta X_t$). SCR trough indices $O$ are defined by a positive rate following negative rates: $O = \{\, t \mid \Delta X_{t-1} < 0 \text{ and } \Delta X_t > 0 \,\}$. SCR peak indices $P$ are defined by a negative rate following positive rates: $P = \{\, t \mid \Delta X_{t-1} > 0 \text{ and } \Delta X_t < 0 \,\}$. This implementation was constrained such that the first trough index must precede the first peak index, and the last trough index must precede the last peak index (i.e. elements of $O$ and $P$ must form trough-to-peak pairings). The SCR amplitudes were then simply calculated as $R_i = X_{P_i} - X_{O_i}$. In the following section describing BEEDA's EDA analysis statistics, the equations will follow the notation in this section. Additionally, $T$ will describe the set of all experimental trials, with specific trials indexed as $T_j$. The set of responses belonging to a given trial will be indexed as $R_i^{T_j}$, and likewise the set of timepoints in a given trial will be indexed as $X_t^{T_j}$.
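As a rough illustration of the procedure described above, the following Python sketch approximates the iterative smoothing and the trough-to-peak detection using numpy/scipy equivalents; it is not the toolbox's MATLAB implementation, and the pairing of troughs to peaks is simplified.

```python
# Illustrative sketch of the smoothing and trough-to-peak logic described
# above, using numpy/scipy equivalents rather than the toolbox's MATLAB code.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def rmssd(x):
    return np.sqrt(np.mean(np.diff(x) ** 2))

def iterative_smooth(x, fs, eps_th=1e-5, step=4):
    """Widen the Gaussian kernel until RMSSD stops improving or sigma reaches 0.125*fs samples."""
    h, eps_old = 0, rmssd(x)
    smoothed = np.asarray(x, dtype=float)
    while h < fs:
        h += step
        candidate = gaussian_filter1d(smoothed, sigma=0.125 * h)
        eps_new = rmssd(candidate)
        if eps_old - eps_new <= eps_th:      # negligible reduction: stop
            break
        smoothed, eps_old = candidate, eps_new
    return smoothed

def detect_scrs(x, threshold=0.05):
    """Simplified trough-to-peak SCR detection on a smoothed EDA trace."""
    dx = np.diff(x)
    sign_change = np.flatnonzero(np.sign(dx[1:]) != np.sign(dx[:-1])) + 1
    troughs = [t for t in sign_change if dx[t - 1] < 0 < dx[t]]
    peaks = [t for t in sign_change if dx[t - 1] > 0 > dx[t]]
    scrs = []
    for trough in troughs:
        later_peaks = [p for p in peaks if p > trough]
        if not later_peaks:
            continue
        peak = later_peaks[0]
        amplitude = x[peak] - x[trough]
        if amplitude >= threshold:
            scrs.append((trough, peak, amplitude))
    return scrs
```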
Number of SCRs: the number of valid SCRs in a trial, $n(R_i^{T_j})$.
Average SCR magnitude: the average SCR amplitude within a trial, $\frac{1}{n(R_i^{T_j})}\sum_{i=1}^{n} R_i^{T_j}$.
Max SCR magnitude: the largest SCR amplitude within a trial, $\max(R_i^{T_j})$.
Cumulative SCR magnitude: the sum of all SCR amplitudes within a trial, $\sum_{i=1}^{n} R_i^{T_j}$.
SCL (average): the mean EDA signal within a trial, $\frac{1}{n(X_t^{T_j})}\sum_{t=1}^{n} X_t^{T_j}$.
SCL (standard deviation): the standard deviation of a trial's EDA signal, $\sqrt{\mathrm{Var}(X_t^{T_j})}$.
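A compact sketch of how these per-trial statistics could be assembled (illustrative only, with hypothetical variable names) is:

```python
# Sketch of the per-trial summary statistics listed above, assuming `scrs`
# holds the detected (trough, peak, amplitude) tuples for one trial and
# `trial_eda` is that trial's artifact-free EDA samples.
import numpy as np

def trial_statistics(scrs, trial_eda):
    amplitudes = np.array([amp for _, _, amp in scrs])
    return {
        "n_scrs": len(amplitudes),
        "avg_scr_magnitude": float(amplitudes.mean()) if len(amplitudes) else 0.0,
        "max_scr_magnitude": float(amplitudes.max()) if len(amplitudes) else 0.0,
        "cumulative_scr_magnitude": float(amplitudes.sum()),
        "scl_mean": float(np.mean(trial_eda)),
        "scl_std": float(np.std(trial_eda)),
    }
```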
SCR threshold: Only EDA responses above this amplitude threshold are considered valid SCRs. Typically an amplitude threshold of .05μS is used, although some researchers advocate for thresholds as low as .01μS ( ). It has also been recommended that sampling resolution be taken into account when considering low thresholds, and that thresholds lower than .01μS should not be used.
Rejection rate: If a rejection rate greater than 0 is specified, trial-wise thresholding is applied according to $R_{th} = \alpha \cdot \max(R)$, where $R_{th}$ is the trial-specific response threshold, $R$ is the trial's set of responses and $\alpha$ is the rejection rate. For example, if the rejection rate is 10% and a trial's largest SCR amplitude is 4μS, SCRs with amplitudes below .4μS are rejected in that trial.
Min SCR latency: The minimum time after a trial's start when EDA data can be considered for analyses (i.e. the stimulus response window). Valid SCR onsets must begin after the specified minimum latency time, and EDA levels before the minimum latency time will be excluded from SCL analyses. A minimum latency of 1 second post-stimulus is reported to be typical.
Max SCR latency: The time after a trial's start when EDA data cannot be considered for analyses. Valid SCRs must begin before the specified maximum latency, and EDA signal after the maximum latency is excluded from SCL analyses. A maximum latency of 3 or 5 seconds post-stimulus is reported to be typical.
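These parameters can be summarized in a brief Python sketch (an illustrative reimplementation rather than the toolbox's MATLAB code), assuming each trial's detected responses are available as onset/amplitude pairs:

```python
# Sketch of the parameter logic described above: latency-window filtering and
# the optional trial-wise rejection-rate threshold. `scrs` holds
# (onset_seconds, amplitude) pairs for one trial; all names are illustrative.
def valid_scrs(scrs, min_latency=1.0, max_latency=4.0,
               amp_threshold=0.05, rejection_rate=0.0):
    # Keep only SCRs whose onsets fall inside the stimulus-locked window.
    windowed = [(t, a) for t, a in scrs if min_latency <= t <= max_latency]
    # Fixed amplitude threshold.
    kept = [(t, a) for t, a in windowed if a >= amp_threshold]
    # Optional data-driven threshold: reject SCRs below a fraction of the
    # trial's largest response (e.g., 10% of the maximum amplitude).
    if rejection_rate > 0 and kept:
        r_th = rejection_rate * max(a for _, a in kept)
        kept = [(t, a) for t, a in kept if a >= r_th]
    return kept
```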
BEEDA’s system requirements are: Matlab R2014b or newer, and the Matlab Signal Processing Toolbox. Any computer with that prerequisite Matlab software can run BEEDA (e.g. regardless of operating system). However, users are recommended to run BEEDA with Matlab R2015a, since the toolbox was developed and extensively tested with R2015a. BEEDA was designed for input datasets containing both EDA and respiration recordings. However, suitable input files may also contain placeholder values for either data channel (i.e. for datasets without either respiration or EDA recordings). The toolbox was designed to accept raw data files from Biopac (Biopac Systems Inc., USA) recording systems. The BEEDA user-guide describes how these files are obtained from Biopac systems, and how these files are formatted. Although BEEDA was designed to easily accept files from these widely-used systems, any comparably formatted files are also suitable (i.e. from other recording systems). The acceptable formatting is very basic, and therefore recordings from other systems should not present major issues.
We have provided a sample dataset . This data was collected during an emotional-image viewing experiment, and is provided for toolbox demonstration purposes. Documentation for this sample dataset is included with the distribution, and provides further background about the experiment and the data’s structure. We have also provided example analysis output using this dataset as . This output shows artifact information from data cleaning, along with the analyses described in the sections on EDA analysis functionality and statistics. The output file is formatted as a .CSV spreadsheet, with easily interpretable column headers.
Breathe Easy EDA is a novel MATLAB toolbox developed for easy and reliable identification of respiration-related artifacts in EDA data. This software was specifically built to accommodate the methodological considerations of psychophysiology researchers through a simple, flexible, interoperable, and tolerant design. BEEDA's simplified data presentation allows efficient data inspection and cleaning, without sacrificing functionality in the GUI. In fact, the intuitive interface includes features that are absent from widely used contemporary EDA software, but still essential to researchers (e.g., an "undo" function). The artifact cleaning functionality extends to integrated reliability analyses, providing a simplified means for researchers to establish the consistency of their artifact-control procedures across independent raters. BEEDA's common output-file format and range of analysis capabilities also allow users to integrate this toolbox in their analysis pipelines without precluding alternate software packages. Furthermore, BEEDA was built to flexibly handle any experiment where both respiration and EDA data were collected, regardless of trial duration or experimental design. In these ways, this software provides researchers with optimized tools for psychophysiology analysis. The toolbox is freely available from http://github.com/johnksander/BreatheEasyEDA , and the user-guide documentation for BEEDA is included with this distribution.
Software/source code available from: http://github.com/johnksander/BreatheEasyEDA Archived source code as at time of publication: https://doi.org/10.5281/zenodo.1168739 ( ). License: GNU General Public
Figures 1– were produced with this data.
Development and Evaluation of the Virtual Pathology Slide: A New Tool in Telepathology | 9088fe45-3852-4209-aece-0759d8a8c051 | 1550558 | Pathology[mh] | Definition of Telepathology Telepathology is the practice of diagnostic pathology by a remote pathologist utilizing images of tissue specimens transmitted over a telecommunications network . Traditionally telepathology systems are defined as either dynamic or static. Dynamic systems allow a telepathologist to view images transmitted in real time from a remote robotic microscope that permits complete control of the field of view and magnification . Static (or store-and-forward) telepathology involves the capture and storage of images followed by transmission over the Internet via e-mail attachment, file transfer protocol, or a Web page, or distribution via CD-ROM. Dynamic hybrids also exist, which incorporate aspects of both technologies . Applications of Telepathology The diversity in telepathology systems reflects growing technological expertise in this area and the increasing importance of telepathology in education, training, quality assurance, and teleconsultation [ - ]. Numerous pathology archives abound on the Internet providing links to both educational and commercial telepathology websites. These offer access to either static or dynamic image delivery systems [ - ]. Limitations of Telepathology Image quality and the ability to make diagnostic decisions from electronically-compressed images is a contentious issue [ , - ]. In order for telepathology to be of clinical use, studies have attempted to access the diagnostic accuracy of store-and-forward telepathology, and have shown accuracy in the range of 77% to 100% [ , - ]. The diverse nature of this technology makes it difficult to draw comparisons between studies, or to form a consensus on a method of best practice. There is no universally-accepted standardization in hardware, software, image resolution, color-depth, or image compression and storage . However, studies have shown that the use of images with as low a resolution as 1024 pixels x 768 pixels resolution x 24-bit color does not impair diagnostic performance [ , , - ]. To contend with such nonstandardization, guidelines have been formulated for the capture and treatment of diagnostic images and for the practice of telepathology [ - ]. Recent improvements in Internet-browser technology have facilitated the development of interactive store-and-forward Web pages. These feature the ability to show the spatial relationship between individual images in low-power and high-power views. This technology is commonly visualized using a small image gallery constructed from one or two microscopic fields out of a possibility of thousands, displaying images of the same fields at higher magnifications [ , , ]. Field selection and interpretation are thought to be the primary reasons specific to store-and-forward telepathology that account for its discordance with diagnosis in a conventional pathology setting [ - ]. Studies involving multiple pathologists provide the most robust and accurate method of assessing a telepathology technique [ - ]. However it is difficult to distinguish the performance of the technology from the skill of the pathologist and the degree of difficulty of the cases being presented . Until recently, the development of a tool for routine diagnosis and teleconsultation was the driving goal for the evolution of telepathology systems. 
Initial expense, lack of broadband Internet connections, potential liabilities, and a lack of knowledge transfer from expert to potential user have all contributed to preventing the incorporation of telepathology into everyday practice [ - ]. The emerging role of telepathology in the area of education and quality assurance is not encumbered by the same difficulties. It has been demonstrated that the application of telepathology in such roles has the advantage of lower cost, less logistical effort, and a positive response to its use by the end user [ - ]. Coupled with the growing presence of ultra-fast slide scanners, this should ensure an increasing role for telepathology in this area [ - ]. The Virtual Pathology Slide (VPS) To overcome problems attributable to sampling bias and interpretation resulting from limited field selection, telepathologists must be able to navigate to any field of view, at magnifications comparable to that of a conventional microscope, using images of sufficient resolution to render a correct diagnosis [ , , ]. To meet such criteria we have developed the Virtual Pathology slide (VPS) . This is a microscope emulator that displays digitized representations of tissue slides, allowing inspection of numerous fields of view, over a wide range of magnifications. Similar applications, commonly referred to as Virtual Slides, have been developed by other commercial and academic bodies [ , , , , , - ]. A screenshot of our VPS is shown in ; further screenshots are in . In an important new departure, the VPS can also record and quantify the diagnostic trace of a pathologist, as a discrete data set on a central server. This allows the decoupling of a pathologist's field selection from the technical functionality of the telepathology system. In this paper, we report on the development of this system, its acceptability among a group of evaluating pathologists, the level of diagnostic agreement among this group, and the potential future applications of the VPS in telepathology.
A comprehensive document detailing the scanning algorithm and system architecture of the VPS is in . Construction of the VPS Development of VPS Imaging Workstation To create VPS slides, an imaging workstation was developed in-house. An Olympus BX-40 microscope (Olympus, Melville, NY, USA) incorporating a 40x plan apochromat lens with a 0.95 numerical aperture was used. The microscope was fitted with a robotic stage (Prior Scientific Inc, Rockland, Mass, USA) and a JVC 3-CCD (3-chip charge-coupled device) video camera. Development of VPS Slide Scanning Algorithm Using Optimas 6.5 imaging software (Media Cybernetics, Inc, Silver Spring, Md, USA), an algorithm was written in ALI (Analytical Language for Images) to perform a raster scan of 15.53 mm x 11.61 mm (180 mm 2 ) of tissue at 40x objective magnification. The VPS raster scan acquires 128 x 128 images in the X and Y Cartesian directions, one row at a time. Each acquired image represents 0.011 mm 2 at a resolution of 768 pixels by 574 pixels. Images were saved using a JPEG (Joint Photographic Experts Group) format at 10% compression, resulting in image-file sizes in the range of 100 to 150 KB (kilobytes). To build layers of lesser magnification, a second algorithm was developed, which tiles and resizes multiple images from the raster scan into composite images . Images were subsequently uploaded onto the VPS Web server. Development of VPS Web Interface To view images via the Internet a graphical user interface was constructed . This is a Web page powered by server-side scripting in PHP (PHP = Hypertext Preprocessor). The interface emulates the experience of using a conventional microscope by allowing a user to increase or decrease magnification or move laterally while examining a tissue section. A customized browser was developed to control the user's access to the VPS during dedicated studies, to optimize the integrity of recorded data, and to provide a uniform experience for users who would otherwise experience subtle differences due to variation in currently-existing versions of Web browsers. The VPS browser is a Microsoft Foundation Class (MFC) application written in Visual C++, which utilizes Internet Explorer file libraries to behave as a customized browser. The VPS customized browser opens up prescribed Web pages on the VPS server. The VPS browser is optimized for PC users with Microsoft Internet Explorer 5 or greater. Development of VPS Database When a user examines a VPS slide, data describing the user's interaction with the VPS is transmitted from the user's workstation to the VPS server and stored in an Oracle database. The VPS examination database is structured to contain the following data types: System Configuration Data This consists of data automatically recorded on the VPS server and includes parameters such as user's browser version, operating system, screen resolution, screen color depth, and IP (Internet Protocol) address. User Tracking Data This data records a user's "diagnostic pattern" as the user examines a slide. Information recorded includes image file name, image magnification, and the time spent viewing each image. User-submitted Data Diagnostic and descriptive data is submitted to the VPS server by participants, using HTML (Hypertext Markup Language) forms. Information recorded includes the report submitted by the user at the end of each slide examination and a final questionnaire. The observer also has the option to record or annotate every field of view examined. 
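The tiling step that builds the lower-magnification layers can be illustrated with a brief Python sketch; this is not the original Optimas/ALI implementation, and the tile-access function and the 2 x 2 grouping factor are assumptions for the example.

```python
# Illustrative sketch (not the original Optimas/ALI code) of building a
# lower-magnification layer by tiling high-magnification fields and resizing
# the composite. The tile provider and grouping factor are assumptions.
from PIL import Image

TILE_W, TILE_H = 768, 574          # pixel size of each captured field

def composite_tile(get_tile, row, col, factor=2):
    """Combine a factor x factor block of 40x tiles into one lower-power tile.

    get_tile(r, c) should return a PIL.Image for raster position (r, c).
    """
    canvas = Image.new("RGB", (TILE_W * factor, TILE_H * factor))
    for dr in range(factor):
        for dc in range(factor):
            tile = get_tile(row * factor + dr, col * factor + dc)
            canvas.paste(tile, (dc * TILE_W, dr * TILE_H))
    # Resize back to the standard field size, reducing effective magnification.
    return canvas.resize((TILE_W, TILE_H), Image.LANCZOS)
```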
VPS Deployment

The user has two choices on how he or she wishes to use the system. Users with high-speed Internet access can download the VPS browser from the VPS homepage and view images downloaded directly from the VPS server. To accommodate users with slow Internet connections, users may launch the VPS browser from a VPS CD and view images stored on the CD. However, an Internet connection is still required to record data on the VPS database, and to provide essential data for statistical analysis and playback facilities.

Validation of the VPS

Slide Selection

Ten needle core biopsies were obtained from the Department of Pathology, Mater Misericordiae Hospital, Dublin, Ireland. The slides were randomly selected by a pathologist (P.A.D.) with a special interest in breast pathology. The slides represented a range of diagnostic classifications. Two of the slides are presented in . All 10 slides can be viewed in .

Participants

Fifty-four pathologists with at least 2 years' experience in pathology practice registered for the study. Of the 54 pathologists, 17 examined all 10 slides and 8 initiated the study but did not complete it. Of the 17 participants who completed the study, 8 were members of the European Working Group of Breast Screening Pathology. Of the 17 participants who examined all 10 slides, 13 subsequently completed a questionnaire on user perception of the VPS. Of the 8 participants who initiated the study but did not complete it, 3 completed the questionnaire.

Examination Procedure

Upon launching the VPS browser, participants were prompted to log in using the username and password they received at registration. This made them identifiable to the system. On successful log-in, the VPS needle core examination guidelines were displayed. After stating that they had read the guidelines, users were permitted to browse the slides available for examination and select one from a slide gallery. The slide gallery displayed a thumbnail image of each slide and indicated the patient's age and sex, and a brief case description. Upon selecting a slide for examination, participants were presented with the VPS user interface. While examining a slide, participants could, if desired, annotate the fields of view using the text area provided. Upon completing a slide examination, participants submitted an online report that provided a diagnostic classification for the case, using an adaptation of the Core Biopsy Reporting Guidelines for Non-operative Diagnostic Procedures and Reporting in Breast Cancer Screening as used by the British National Co-ordinating Committee for Breast Screening Pathology. Users were requested to classify the slides as one of the following:

B1: Unsatisfactory/normal tissue only.
B2: Benign.
B3: Benign but of uncertain malignant potential.
B4: Suspicious of malignancy.
B5: Malignant.

For slides categorized as B5, participants were required to subclassify their decision as malignant, in-situ, or invasive. Upon making a classification, participants were returned to the slide gallery, from which another slide could be selected for examination. Utilization of this data allowed the following to be determined:

Percentage concordance for a user, calculated as the number of slides (expressed as a percentage) for which the user's diagnosis is in agreement with the consensus VPS diagnosis.

Percentage concordance of a slide, calculated as the percentage of users who concur as to the correct diagnosis of a slide.
Cohen's Kappa [ - ], a measure of agreement between observers that takes into account agreement that could occur by chance. Kappa values range from 0 to 1, with a score greater than 0.7 indicating "substantial agreement."

Participants who completed examination of the 10 slides were subsequently requested to complete an online questionnaire describing their experience of using the VPS. Participants were asked to give a subjective evaluation of diagnostic confidence in using the VPS, reasons for uncertainty, an evaluation of image quality, and perceived download speed.
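The paper does not state which software was used to compute these agreement statistics. As an illustration only, the percentage concordance and a linearly weighted kappa for one participant against the consensus diagnosis could be computed as in the Python sketch below; the category codes are hypothetical (B1–B5 mapped to 1–5), and scikit-learn is assumed to be available.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical example: one participant's B1-B5 calls vs the consensus call
# for the 10 slides, coded as integers 1-5.
consensus   = [2, 5, 5, 2, 3, 5, 2, 3, 5, 5]
participant = [2, 5, 5, 2, 4, 5, 2, 4, 5, 5]

# Percentage concordance: share of slides where the participant agrees
# with the consensus diagnosis.
concordance = 100 * sum(p == c for p, c in zip(participant, consensus)) / len(consensus)

# Weighted kappa: linear weights penalise a one-category deviation
# (e.g. B3 vs B4) less than a larger one (e.g. B2 vs B5).
kappa = cohen_kappa_score(participant, consensus, weights="linear")

print(f"concordance = {concordance:.1f}%, weighted kappa = {kappa:.2f}")
```

The linear weighting mirrors the idea, discussed later in the paper, that a diagnosis one degree away from the popular choice is penalised less than a larger deviation.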
User Performance Using the VPS

shows strong diagnostic agreement between the original glass-slide diagnosis and the most-common diagnosis offered by users of the VPS, with agreement being reached in 9 out of the 10 slides. Disagreement by 1 diagnostic degree occurred with slide 8 (glass-slide diagnosis was B3; most-common VPS diagnosis was B4). The diagnostic classification of slide 8 had the lowest level of agreement between participants at 38.5%. The second most popular choice for slide 8 was split between B3 and B2: 6 participants (35.3% of users) classified it as B4, while 4 participants (23.5% of users) classified it as B3 and 4 participants (23.5% of users) classified it as B2. Participants with the 4 highest Kappa scores (23.5% of users) classified slide 8 as B4. A more-detailed analysis of the diagnostic classifications made by participants is described in .

The average percentage concordance between participants on all cases was 66.5%. Of the 17 participants, 14 attained a percentage concordance of between 90% and 60%. The average Kappa value achieved by participants was 0.76. Participants 36 and 6 achieved a Kappa of 0.26 and 0.23 respectively, indicating "fair agreement" [ - ] with other participants, while the remaining 15 participants achieved a Kappa of between 0.97 and 0.65. The average percentage concordance for slides was 66.5%, with a minimum concordance of 35.3% for slide 8 and a maximum concordance of 100% for slide 6. The percentage concordance for slide 5 was 47%. For all remaining slides there was greater than 50% agreement between participants.

The average number of fields of view examined by each participant was 23 per slide. Participant number 5, who achieved the highest Kappa, examined 321 views, while participant number 6, who had the lowest Kappa, examined 418 fields of view. The highest number of fields of view examined for a particular slide was 118, by participant number 6 while examining slide 10. This slide had a percentage concordance between participants of 52.9%. The lowest number of views examined while examining a slide was 3; this was by participant 10, who achieved a Kappa score of 0.91 and agreed with the group consensus for slide 2. Diagnosis for slide 2 had a percentage concurrence amongst participants of 94%. The average time taken for participants to examine a slide was recorded as 6 minutes 11 seconds. The maximum time taken to examine a slide was recorded as 12 minutes 49 seconds, by participant number 36 with an average bandwidth of 20 kilobits per second while examining slide 7. The minimum examination time was recorded as 43 seconds, by participant number 1 with an average bandwidth of 64 kilobits per second while examining slide 2.

User Perception of the VPS

Participants were asked to assess their own computer competency and the frequency with which they use a telepathology system. Participants described themselves as "advanced" (18.75%), "competent" (18.75%), or "adequately competent" (62.5%) with computers, while 44% of participants indicated they had never used a telepathology system prior to the study. illustrates that 68.75% of participants rated the VPS "easy" (62.5%) or "very easy" (6.25%) to use. Participants were requested to rate their degree of confidence in making a diagnostic decision using the VPS. illustrates that 81.25% of participants expressed confidence in using the VPS, with 56.25% indicating they were "reasonably confident," while 18.75% were "confident," and 6.25% were "very confident" in making a diagnosis.
illustrates that 87.5% of participants expressed satisfaction with the image quality, with 43.75% rating the quality as "adequate," 25% as "good," and 18.75% as "excellent."
The VPS system is a realistic alternative to dynamic telepathology, in terms of its ability to mimic a conventional microscope, its accessibility via the Internet, and its simplicity of operation. Of the 17 participants, 15 achieved a Kappa of between 0.97 and 0.65, and 14 attained a percentage concordance of between 90% and 60%. This demonstrates "substantial" agreement between users when using the VPS [ - ].

The calculation of Kappa was weighted to reflect the degree of variation of a participant's diagnostic decision from the most popular choice. For example, participant 18 achieved a high Kappa of 0.86 despite being in agreement with other participants for only 4 out of the 10 slides. This is because, for each of the other 6 slides, participant 18 deviated from the popular choice by only one degree. Participant 36 achieved the same percentage concordance as participant 18 but only achieved a Kappa of 0.26. This is because the diagnostic categories selected by participant 36 deviated to a greater degree from the popular choice than those selected by participant 18 [ - ].

Participant 36 and participant 6 attained the lowest Kappa scores, of 0.26 and 0.23 respectively. This reduced the overall average Kappa value considerably. Confidence in using the VPS was described as "reasonably confident" by participant 36, who had 3 years' experience in pathology and examined 201 fields of view while examining the entire set of slides. Further analysis of the images viewed is necessary to elucidate reasons for the diagnostic decisions made by participant 36; however, inexperience with breast pathology coupled with insufficient examination of the slides may have contributed to poor performance. Participant 6 described using telepathology "infrequently," was "confident" in making a diagnostic classification using the VPS, and described the use of the VPS as "easy." However, participant 6 attributed some diagnostic uncertainty to "problems with assessing significance of small subtle lesions without having the whole slide to look at." Participant 6 examined 418 fields of view, the highest number examined by any participant.

The average percentage concordance for the entire set of slides was 66.5%. Full agreement between participants was achieved for slide 6, which demonstrates that full agreement can be achieved using the VPS. The average number of views examined by participants while examining the entire set of slides was 230. The percentage concordance for a particular slide decreases as the average number of fields examined for that slide increases. For example, the average number of fields examined for slide 6 (100% concordance amongst participants) was 14.3, while the average number of fields examined for slide 10 (52.9% concordance amongst participants) was 34.8. Conversely, participants with a high Kappa score tended to view a greater number of fields of view than participants with a low Kappa score, suggesting that the greater the amount of tissue viewed by a pathologist, the more likely they are to make a correct diagnosis.

Slide 8 had the lowest level of concordance, at 35.3%. This reduced the average percentage concordance for the set of slides by 3.46%. shows there is a broad distribution of diagnostic categorization for slide 8 by participants. As shown in , for slide 8 the number of fields of view examined by participants is low (299) given the apparent complexity of the case.
It is apparent that users rapidly reached a conclusion that usually did not concur with the original glass-slide diagnosis. Further study of the examination traces from this slide will be required to evaluate the reasons for the diagnostic spread.

Participants with 3 years or less experience did not have access to a broadband Internet connection and recorded bandwidth speeds of less than 15 kilobits per second. These participants expressed least satisfaction with the VPS in terms of ease of use, image quality, and diagnostic confidence. All 3 participants who indicated they were "not confident" attributed difficulty in using the VPS to poor download speed, with comments such as "Poor download speed was extremely slow and made the viewing experience disjointed and basically unworkable." Of these 3 participants, 2 had a working bandwidth of 12.6 kilobits per second and 31.5 kilobits per second respectively. A bandwidth could not be determined for the third; however, this participant did offer comments such as "too long to download images" and "problem was on my end, slow connection."

High-speed broadband Internet connectivity is still unavailable to many pathologists. This is a major limiting factor for the acceptability of Web-driven telepathology, due to the time taken to download large image files over the Internet [ , , ]. We have attempted to overcome this with the development and deployment of a CD-ROM VPS system to selected participants. This facilitates rapid retrieval of images from a CD while data pertaining to the examination are transmitted to and stored on the VPS Web server.

Participants were asked to comment on improvements to the VPS that they would like implemented. A number of participants suggested they would like additional magnification ranges. For example, "Navigation within the slide was disjointed and it was difficult to maintain perspective whilst moving from field to field. The range of magnifications was too limited, especially in the intermediate magnification range."

There are a growing number of interactive pathology sites available via the Internet [ - ]. The diversity in their principle of operation, their application in telepathology, and their degree of sophistication promises an encouraging future for telepathology. The contribution of the VPS to the field of telepathology is notable in that it records the diagnostic pathway of a pathologist's slide examination. We now have the diagnostic traces of 17 pathologists examining 10 cases. We intend to utilize this data to elucidate the cognitive and decision-making processes of pathologists as they render a diagnosis when using a microscope. This will provide valuable insight into interobserver variability and the subjective process of microscopic diagnosis.
Effects of culturally-appropriate group education for migrants with type 2 diabetes in primary healthcare: pre-test-post-test design | 234ec0d1-50b0-4dbb-bf1d-4eb63c8395cc | 11699762 | Patient Education as Topic[mh] | Type 2 diabetes has rapidly developed into what has been considered a pandemia, particularly affecting migrants (refugees and immigrants) residing in developed countries . This surge in Type 2 diabetes cases will lead to an increase in the utilisation of medical services for diabetes treatment, thereby having significant economic implications for the healthcare system, aside from the implications for affected individuals and their families. The most important cornerstone in self-management of type 2 diabetes is active participation in self-care, based on knowledge about the disease . Thus, patient education should aim to enhance a patient’s knowledge and skills regarding management, empowering them to take an active role in their treatment , to achieve optimal glycaemic control to prevent complications related to diabetes . However, presently there is a discussion about what kind of teaching method gives the best result, but few studies have evaluated different methods for teaching ethnic minority groups or migrant groups . In the UK and Australia a structured group-based educational programme for individuals with type 2 diabetes entitled DESMOND, The Diabetes Education and Self-Management for Ongoing and Newly Diagnosed, is used. It is one of the few initiatives that has been evaluated and shown a variety of health improvements. Some of the courses in the programme have been adapted to be used in South Asian ethnic populations, but the effect on ethnic minority groups needs to be evaluated . However, neither this study nor those included in previous reviews of culturally appropriate health educations for type 2 diabetes are focused on migrants but on ethnic minority groups without considering their migratory background or history. Migrants are particularly vulnerable while trying to adapt to a new life-style and environment in the new country in the acculturation process . A previous Danish study explored the impact of a culturally sensitive diabetes self-management education and support intervention on mental and physical health of immigrants with type 2 diabetes with primary language in Urdu, Arabic and Turkish . The six week programme utilizing person-centered dialogue tools showed that it effectively improved health but did not measure knowledge. Thus, this study is the only to evaluate the effects of a culturally appropriate diabetes intervention for migrants, on diabetes knowledge and health outcomes, adding a novel perspective to the existing literature. Health education that is tailored to the cultural and religious beliefs, as well as the linguistic skills of the targeted community, and also considering literacy skills can be defined as culturally appropriate health education . Research in this area has increased over the last decennium, indicating that culturally appropriate diabetes education has consistent benefits compared to conventional care with improved diabetes knowledge and glycaemic control. However, further studies to investigate successful aspects of culturally tailored education models for migrants with type 2 diabetes are needed. Additionally, new models for diabetes education should be developed and tested to determine their clinical significance . 
Despite a previously expressed need, culturally tailored diabetes education models for migrants are scarce or have not been evaluated , and their effects remain untested. Thus, the model to be tested here (see Hadziabdic et al., 2020 for further details) is important and aimed at filling a knowledge gap. The model differs from previous attempts as it focuses on migrants and starts from the participants' own beliefs about health and illness, based on their knowledge. It is conducted through focus group discussions in order to reach individual beliefs. Since beliefs are culturally determined and learned through socialisation , the model is culturally tailored and person-centred, and it is delivered by a multi-professional team, instead of education sessions consisting of structured lectures in which the educator, usually a healthcare professional, teaches the patient about diabetes care. It also differs from previous studies in that the multi-professional team includes a physician, to provide comprehensive knowledge of diabetes management. Previous research has shown that group-based education leads to improvements in patients' knowledge about diabetes and glycaemic control . Previous qualitative studies have indicated that migrants have limited knowledge about diabetes and tend to underestimate its seriousness, which negatively influences their self-care compared to Swedish-born persons . A survey assessing diabetes knowledge confirmed this hypothesis . Furthermore, individuals from non-European countries exhibited the lowest level of knowledge about diabetes. There is ongoing debate about which teaching method gives the best results, but few studies have evaluated different methods for teaching migrants. Previous studies lack a theoretical base and do not consider individuals' own beliefs about health and illness, which are influenced by their knowledge and guide their health-related behaviour . Therefore, the aim of this study was to evaluate the effects of a previously developed culturally appropriate diabetes education model on diabetes knowledge, HbA1c, and self-rated health (SRH). The model is based on individual beliefs about health and illness, underpinned by knowledge, and conducted through focus group discussions. Thus, the model is both individually and culturally tailored, with the aim of improving knowledge about type 2 diabetes among migrants and thereby promoting increased participation in self-care, leading to improved health outcomes. It was hypothesised that the group-based education model could change individuals' levels of knowledge and risk awareness. This, in turn, was expected to increase perceived self-efficacy and the inclination to actively participate in self-care among foreign-born individuals diagnosed with type 2 diabetes living in Sweden.
Design

An observational study evaluating an intervention using a pre-test-post-test design was conducted . Individual structured interviews and HbA1c measurements were obtained before the intervention, at baseline, immediately after, and 3 months after the group sessions. The group-based culturally appropriate diabetes education model for migrants with type 2 diabetes to be evaluated has previously been described .

Sample and setting

Individuals diagnosed with type 2 diabetes who were migrants (immigrants and refugees) residing in Sweden were recruited by healthcare staff from primary healthcare centres ( n = 3) located in immigrant-dense areas. Data were collected at baseline, immediately after, and 3 months after the group sessions. Inclusion criteria for the study were: a diagnosis of type 2 diabetes (ICD E11; WHO ), age ≥ 18 years, and a duration of diabetes ≥ 1 year. Participants with known psychiatric diagnoses (ICD F00–F29/F60–F99) registered in the medical records were excluded, on the grounds that cognitive deficiency might influence the results.

Sixty-three individuals were invited, and 33 signed up for the intervention. Of the 30 individuals (13 females, 17 males) who were identified but did not participate, reasons for non-participation differed. One could not participate due to not being immunised, another did not attend the scheduled meeting, and a third expressed being too occupied. Two individuals resigned due to illness, four were abroad, and seven declined to participate. Four individuals did not answer the phone or could not be contacted, and for the remaining eleven it was unknown whether they could be contacted or identified.

After receiving information (either oral or written), the operation managers at the healthcare centres approved the study and informed the diabetes specialist nurses about the study (orally or in writing). Subsequently, members of the research team participated in workplace meetings to provide additional information to the staff. The diabetes specialist nurses (DSNs) then identified persons who met the inclusion criteria based on digital medical records. Invitation letters with information about the study, translated into the language spoken by the individual, were sent or given during visits to eligible participants. They were asked to fill in a response form and return it in a prepaid envelope by mail or to the staff at the healthcare centre, who then forwarded it to the researcher. The invitation letters were translated by authorised translators into the languages spoken by the identified individuals. Follow-up telephone calls, in the presence of an interpreter, were made by the DSNs as reminders.

Data collection

Data were collected from March 2015 to March 2016 and from September 2019 to October 2023, at baseline, immediately after, and 3 months after the group sessions. However, the study period was affected by two years of the Covid-19 pandemic, from March 2020 to May 2022. The study was therefore interrupted at three different time points (at the start, middle, and end of the pandemic) due to visiting restrictions in healthcare facilities, which prevented group education sessions from being started. This was particularly relevant as individuals with diabetes were considered a high-risk group, necessitating strict measures to protect them from infection.
A registered nurse performed structured interviews (lasting about 45–60 min, including all instruments) in a secluded location at the primary healthcare centre, in the presence of a professional authorised interpreter. Sequential interpretation techniques were applied (word-by-word), with the interpreter translating what was being said literally, using the first person (I-form), remaining neutral and maintaining confidentiality . During the interview, glycosylated haemoglobin (HbA1c) was measured at the healthcare centre.

The structured interview guide for the whole project was developed based on the research team's previous research experience (e.g. Hjelm et al. ), a literature review, and previously developed and tested instruments such as the Diabetes Knowledge Test (DKT) and Self-Rated Health . In this study, findings from the Diabetes Knowledge Test (DKT; see ), Self-Rated Health (SRH; see ) and clinical and socio-demographic background data are reported. The interview guide was also pilot-tested, and its face and content validity were checked and found to work well.

Intervention

The culturally appropriate diabetes education model is centred on individual beliefs about health and illness, based on knowledge, and is conducted through focus group discussions comprising five sessions held every second week; the programme was completed within three months. The sessions were led by a diabetes specialist nurse in collaboration with a multi-professional team (diabetes specialist nurse, physician, dietician) (for details, see Hadziabdic et al., 2020 ). Each focus group should include 4–5 persons and last approximately 90 min, in the presence of an interpreter. A thematic interview guide is used, with broad open-ended questions and descriptions of critical situations/health problems. Participants are encouraged to discuss their individual beliefs based on their own knowledge. Healthcare staff present at the sessions answer questions, provide additional information and ensure that basic principles of diabetes care are addressed when necessary. This diabetes education model is tailored to both individual and cultural aspects and has the potential to improve knowledge about type 2 diabetes among migrants, thereby increasing self-care behaviours and improving health. The model was tested in eight focus groups, each comprising five education sessions, and included 22 migrants (14 females, 8 males).

Measures

The participants' self-reported demographic characteristics included age, gender, country of birth, migration background (employment, refugee, relative), duration of residence in Sweden, whether diagnosed in Sweden or abroad, duration of diabetes, treatment received, self-reported complications related to diabetes, educational level, employment status, and marital status. The outcome measures used for this study were HbA1c, diabetes knowledge, and self-rated health (SRH).

The participants' knowledge was assessed using the Diabetes Knowledge Test (DKT), developed by the Diabetes Research and Training Center at the University of Michigan . The test includes two subscales, with a total of 23 items appropriate for adults with type 2 or type 1 diabetes. In this study, only the first subscale (14 items; general part) was used, as the second subscale focuses on issues regarding insulin treatment, and only individuals diagnosed with type 2 diabetes were included. The DKT has shown good psychometric properties, with adequate validity and reliability (Cronbach's alpha > 0.79) .
The questionnaire has been adapted and used, following translation, in many countries around the world and among populations of different origins (e.g. ). Translation into Swedish was done in several steps to ensure preservation of the essential meaning of the items . The DKT was translated into Swedish and back-translated into English by two independent professional translators. The PI for the study (first author) then reviewed the two versions and confirmed their equivalence. Interviews were performed with the assistance of professional interpreters in the respective languages. When assessing diabetes knowledge using the DKT , each correct answer was awarded one point, with zero points for a wrong answer or no response. The total score was calculated as the sum of points for the general knowledge section, questions 1–14 .

Self-rated health (SRH) was investigated with a single question—"How do you perceive your overall health status?"—which could be answered on an ordinal five-point scale with "very good", "good", "fairly good", "bad", or "very bad". This question has been well validated and serves as a valuable summary of individuals' perceptions of their overall health status (or SRH) . The patient's own self-rated health has been shown to predict future use of healthcare services, morbidity and mortality . In the data analysis, responses of "very good", "good", and "fairly good" were summarized, as were "bad" and "very bad".

Statistical analyses

Demographic characteristics were reported as medians, ranges, numbers and percentages, while outcome values were given as means (SD) . Differences between measurements were analysed using paired t-test comparisons. To increase robustness against potential departures from normality, Wilcoxon signed-rank tests for paired data were also carried out. The analysis of SRH was based on a dichotomous SRH measurement, distinguishing low or very low SRH (responses "bad" and "very bad") from other levels of SRH (responses "very good", "good", and "fairly good"). To test for differences between measurements, McNemar's test was applied. However, readers should be aware of the limited sample size, which may affect the interpretation of this test. Statistical significance was set at p < 0.05. Data were analysed using the Statistical Package for the Social Sciences version 27 (SPSS Inc., Chicago, IL, USA) and R (v 3.2).

Ethical considerations

The study was approved by the Swedish Ethical Review Authority (Dnr 2014/198–31, 2018/324–32) and was performed in accordance with the Declaration of Helsinki. Written informed consent was obtained from all participants (World Medical Association Declaration of Helsinki, 2013 ).
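The analyses described above were run in SPSS 27 and R. Purely as an illustration, and not the authors' code, the DKT scoring, the SRH dichotomisation, and the paired tests could be sketched in Python as follows; all data values, and the availability of NumPy, SciPy and statsmodels, are assumptions.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.contingency_tables import mcnemar

# --- DKT scoring: one point per correct answer on the 14 general items. ---
# Hypothetical response vector; 1 = correct, 0 = wrong or no response.
dkt_responses = np.array([1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0])
dkt_score = int(dkt_responses.sum())           # total general-knowledge score (0-14)

# --- SRH dichotomisation: "bad"/"very bad" vs all other responses. ---
srh_answers = ["good", "bad", "fairly good", "very bad", "good"]    # hypothetical
srh_low = [answer in ("bad", "very bad") for answer in srh_answers]

# --- Paired comparisons (e.g. DKT or HbA1c at baseline vs post-intervention). ---
baseline = np.array([5, 7, 4, 9, 6, 8, 3, 7])                       # hypothetical
post     = np.array([8, 9, 6, 10, 9, 9, 6, 8])                      # hypothetical
t_stat, t_p = stats.ttest_rel(post, baseline)      # paired t-test
w_stat, w_p = stats.wilcoxon(post, baseline)       # Wilcoxon signed-rank test

# --- McNemar's test on paired dichotomous SRH (low SRH yes/no, before vs after). ---
# 2x2 table of paired counts: [[no->no, no->yes], [yes->no, yes->yes]].
table = np.array([[10, 2],
                  [6, 4]])                                           # hypothetical
mcnemar_result = mcnemar(table, exact=True)

print(f"DKT score = {dkt_score}, paired t p = {t_p:.3f}, "
      f"Wilcoxon p = {w_p:.3f}, McNemar p = {mcnemar_result.pvalue:.3f}")
```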
An observational study evaluating an intervention using a pre-test-post-test design was conducted . Individual structured interviews and HbA1c measurements were obtained before the intervention, at baseline, immediately after and 3 months after the group sessions. The group-based culturally appropriate diabetes education model for migrants with type 2 diabetes to be evaluated has previously been described .
Individuals diagnosed with type 2 diabetes who were migrants (immigrants and refugees) residing in Sweden were recruited by healthcare staff from healthcare centres in primary care ( n = 3) located in immigrant-dense areas. Data were collected at baseline, immediately after and 3 months after the group sessions. Inclusion criteria for the study were: individuals diagnosed with type 2 diabetes (ICD E, 11; WHO ), aged ≥ 18 years, and with a duration of diabetes ≥ 1 year. Participants with known psychiatric diagnoses (ICD F 00- F29/F60-F 99), registered in the medical records, were excluded on the grounds that cognitive deficiency might influence the results. Sixty-three individuals were invited, and 33 had signed up for the intervention. Of the 30 individuals (13 females, 17 males) who were identified but who did not participate, reasons for non-participation differed. One could not participate due to not being immunised, another did not attend the scheduled meeting, and a third expressed being too occupied. Two individuals resigned due to illness, four were abroad, and seven declined to participate. Four individuals did not answer the phone/could not be contacted, and the status of the remaining (eleven) was unknown (whether they could be contacted /identified). After receiving information (either oral or written), the operation managers at the healthcare centres approved the study, and informed the diabetes specialist nurses about the study (orally or in writing). Subsequently, members of the research team participated in workplace meetings to provide additional information to the staff. The diabetes specialist nurses (DSN) then identified persons who met the inclusion criteria based on digital medical records. Invitation letters, translated into the language spoken by the individual, with information about the study were sent or given during visits to eligible participants. They were asked to fill in a response form and return it in a prepaid envelope by mail or to the staff at the healthcare centre, who then forwarded it to the researcher. The invitation letter was translated by authorised translators into the language spoken by the identified individuals. Follow-up calls, in presence of interpreter (by telephone), were made by the DSNs for reminders.
Data were collected from March 2015 to March 2016 and from September 2019 to October 2023, including baseline, immediately after and 3 months after the group sessions. However, the study period was impacted by two years of the Covid-19 pandemic, ranging from March 2020 to May 2022. Thus, the study was cancelled at three different time points (in the start, middle, and end of the pandemic) due to visiting restrictions in healthcare facilities, which prohibited group education sessions to be started. This was particularly relevant as individuals with diabetes were considered a high-risk group, necessitating strict measures to protect them from infection. A registered nurse performed structured interviews (lasting about 45–60 min, including all instruments) in a secluded location at the primary healthcare centre, and in the presence of a professional authorised interpreter. Sequential interpretation techniques were applied (word-by word), with the interpreter translating what was being said literally, using the first person (I-form), remaining neutral and maintaining confidentiality . During the interview, glycosylated Haemoglobin levels, HbA1c, were measured at the healthcare centre. The structured interview guide for the whole project was developed based on previous research experiences by the research team, e.g. Hjelm et al., , literature review, and previously developed and tested instruments such as e.g. the Diabetes Knowledge Test (DKT) and Self-Rated Health . In this study findings from the Diabetes knowledge test (DKT; see ), Self-Rated Health (SRH; see ) and clinical and socio-demographic background data are reported. The interview guide was also pilot-tested, and its face and content validity were checked and found to be working well.
This culturally appropriate diabetes education model is centred on individual beliefs about health and illness, based on knowledge, and conducted through focus group discussions comprising five sessions, held every second week, and the programme was completed within three months. These sessions were led by a diabetes specialist nurse in collaboration with a multi-professional team (diabetes specialist nurse, physician, dietician), (for details, see Hadziabdic et al., 2020 . Each focus group should include 4–5 persons and last approximately 90 min, in the presence of an interpreter. A thematic interview guide is used, with broad open-ended questions and descriptions of critical situations/health problems. Participants are encouraged to discuss their individual beliefs based on their own knowledge. Healthcare staff present at the sessions answer questions, provide additional information and ensure that basic principles for diabetes care are addressed when necessary. This diabetes education model is tailored to both individual and cultural aspects and has the potential to improve knowledge about type 2 diabetes among migrants, thus increasing self-care behaviours and improving health. The model was tested in eight focus groups comprising five education sessions, including 22 migrants (14 females, 8 males).
The participants’ self-reported demographic characteristics included age, gender, country of birth, migration background (employment, refugee, relative), duration of residence in Sweden, whether diagnosed in Sweden or abroad, duration of diabetes, treatment received, self-reported complications related to diabetes, educational level, employment status, and marital status. The outcome measures used for this study were HbA1c, Diabetes knowledge, and Self-rated health (SRH). The participants’ knowledge was assessed using the Diabetes Knowledge Test (DKT), developed by the Diabetes Research and Training Center at the University of Michigan . The test includes two subscales, with a total of 23 items appropriate for adults with type 2 or type 1 diabetes. In this study, only the first subscale (14 items; general part) was used, as the second subscale focuses on issues regarding insulin treatment, and only individuals diagnosed with type 2 diabetes were included. The DKT has shown good psychometric properties, with adequate validity and reliability (Cronbach’s alpha > 0.79) . The questionnaire has been adapted and used, following translations, in many countries around the world and among populations of different origins (e.g. ). Translation into Swedish was done in several steps to ensure preservation of the essential meaning of the items . The DKT was translated into Swedish and back-translated into English by two independent professional translators. The PI for the study (first author) then reviewed the two versions and confirmed their equivalence. Interviews were performed with the assistance of professional interpreters in the respective languages. When assessing diabetes knowledge using the DKT , each correct answer was awarded one point, and zero for a wrong answer or no response. The total score was calculated based on the sum of points for the general knowledge section, questions 1–14 . Self-rated health (SRH) was investigated with a single question—“How do you perceive your overall health status?”—which could be answered on an ordinal five-point scale with “very good”, “good”, “fairly good”, “bad”, or “very bad”. For data analysis, responses of “very good” and “good” were summarized, as were “bad” and “very bad”. This question has been well validated and serves as a valuable summary of individuals’ perceptions of their overall health status (or SRH) . The patient’s own self-rated health has been shown to predict future use of healthcare services, morbidity and mortality . In the data analysis, responses of “very good”, “good”, and “fairly good” were summarized, as were “bad” and “very bad”.
Demographic characteristics were reported as medians, ranges, numbers and percentages, while values were given as means (SD) . Differences between measurements were analysed using paired t-test comparisons. To increase robustness against potential violations of non-normality, Wilcoxon paired tests were also carried out. The analysis of SRH was based on a dichotomous SRH measurement, indicating low or really low (responses “bad” and “very bad”) SRH vs other levels of SRH (responses “very good”, “good”, and “fairly good”). To test for differences between measurements, McNemar’s test was applied. However, readers should be aware of the limited sample size, which may affect the interpretation of this test. Statistical significance was set at p < 0.05. Data were analysed using the Statistical Package for the Social Sciences version 27 (SPSS Inc., Chicago, IL, USA) and R (v 3.2).
The study was approved by the Swedish Ethical Review Authority (Dnr 2014/198–31, 2018/324–32) and was performed in accordance with the Declaration of Helsinki. Written informed consent was obtained from all participants (World Medical Association Declaration of Helsinki, 2013).
Description of sample
Participant characteristics are shown in Table . The intervention included 22 individuals diagnosed with type 2 diabetes, comprising fourteen males and eight females, with a median age of 57 years (range 39–70 years). The majority originated from countries in the Middle East, although some originated from African countries. They had been residing in Sweden for a median duration of 11.5 years (range 3–37 years), with most being refugees and a few having immigrated due to family ties. Most were diagnosed with type 2 diabetes abroad, in their home country, received treatment through diet or oral agents, and had a median diabetes duration of 11 years. Many of them reported complications related to diabetes affecting the eyes. Most had an educational level below primary school and were either unemployed or retired. Thirty-three people had signed up for the intervention, but only 22 ended up participating. Thus, eleven individuals only participated in the baseline measurements and were subsequently interviewed. This group included seven females and four males of the same origin as the intervention group (five from Syria, one from Palestine, three from Eritrea and two from Somalia). They were somewhat older than the participants, with a median age of 67 years (range 37–80 years), and had a longer duration of diabetes, with a median duration of 13 years (range 5–43 years). More individuals in this group were diagnosed in Sweden, and fewer reported complications related to diabetes themselves (data not shown). The reasons for not attending the intervention sessions included illness or travelling abroad. Another reason was education sessions being cancelled, as staff in the healthcare centres expressed that it was impossible to continue with the intervention due to a lack of staff and a heavy workload related to the pandemic. Five persons (three males, two females) were lost to follow-up between baseline and 3 months post-intervention due to their own or a relative’s illness. They did not differ in origin, age or duration of diabetes (median 57 years; 11 years), or self-reported complications (mainly eyes, n = 4). There was also some loss to follow-up on the different outcome variables and time points for measurement, more so for SRH; for details, see Tables , , and .

Evaluation of the intervention: the culturally appropriate diabetes education model conducted in focus groups

Changes in HbA1c
The mean value of HbA1c improved from baseline to immediately after the intervention (62.5 [SD 17.6] vs 58.8 [SD 16.3]; paired mean difference −4.35 [SD 10.3], p = 0.074), but this improvement did not persist at the 3-month follow-up, where it returned to a level similar to the starting point (62.9 [SD 25.5] vs 62.5 [SD 17.6]; paired mean difference −0.56 [SD 17.7], p = 0.9) (Table ). Non-parametric analyses showed the same pattern. Although the results showed a possible initial change of on average −4.35 in HbA1c, this was not statistically significant, and the results do not suggest a general improvement in HbA1c values over time.

Changes in Diabetes Knowledge (DKT)
The mean number of correct answers on the DKT showed that the level of knowledge significantly increased from baseline to post-intervention, both immediately after and 3 months later (from a mean of 6.8 [SD 3.3] to 9.4 [SD 1.6]; paired mean difference 2.59 [SD 3.07], p = 0.0007; and from 6.8 [SD 3.3] to 8.2 [SD 3.3]; paired mean difference 1.40 [SD 2.77], p = 0.0125) (Table ). Non-parametric analyses showed the same results.
Thus, knowledge improved during the intervention.

Changes in Self-rated Health (SRH)
The majority of participants (65%) rated their health positively (expressed as “very good”, “good”, or “fairly good”), while one-third rated their health as low or very low (summarised from “bad” and “very bad”). No significant change was found in self-rated health from baseline to post-intervention, either immediately after the intervention or 3 months later (p = 0.62 and 0.68, respectively) (Table ). Thus, SRH did not change during the intervention.
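For readers who wish to sanity-check the paired comparisons reported above from summary statistics alone, the sketch below recomputes a paired t-statistic and two-sided p-value from a paired mean difference and the SD of the differences. The sample size is an assumption for illustration, since the exact per-comparison n is not restated here; an assumed n of 20 roughly reproduces the reported p ≈ 0.074 for the baseline to post-intervention HbA1c change.

```python
# Recompute a paired t-test from summary statistics (mean difference and SD of differences).
from math import sqrt
from scipy import stats

def paired_t_from_summary(mean_diff: float, sd_diff: float, n: int):
    """Return (t, two-sided p) for a paired t-test given summary statistics."""
    se = sd_diff / sqrt(n)          # standard error of the mean difference
    t = mean_diff / se
    p = 2 * stats.t.sf(abs(t), df=n - 1)
    return t, p

# Reported HbA1c change of -4.35 (SD 10.3); n = 20 is an assumed, illustrative sample size.
t, p = paired_t_from_summary(-4.35, 10.3, n=20)
print(f"t = {t:.2f}, p = {p:.3f}")   # approximately 0.075 under this assumed n
```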
The present study is unique as it evaluates a previously developed culturally appropriate diabetes education model for migrants , which is based on individual beliefs about health and illness, underpinned by knowledge, and conducted in focus group discussions integrated into daily practice in primary healthcare. The findings showed that participation in the diabetes education led to an increase in knowledge levels and a possible short-term improvement in HbA1c (better immediately post-intervention), albeit not statistically significant, but no change in glycaemic control over time or in SRH. Thus, the findings supported the hypothesis of improved knowledge but showed no overall effect on glycaemic control or perceived (self-rated) health. As observed in previous research, this study found that group-based education resulted in improvements in patients’ knowledge about diabetes; however, despite a previously expressed need, culturally tailored diabetes education models for migrants are scarce, and their effects have rarely been evaluated or tested . Previous intervention studies (in groups or individually) have focused on ethnic minority groups (mainly African-Americans in the USA), and neither distinguish migrants from ethnic minority groups nor discuss the influence of migratory background, with the exception of a study on immigrants in Denmark (Urdu, Arabic and Turkish speakers) . However, that culturally sensitive diabetes self-management education and support intervention studied the impact on health, both physical and mental, but not on knowledge. Thus, this study fills an important knowledge gap, and only partial comparisons with previous studies are feasible. The initial change in HbA1c observed in this study, on average −4.35 although not statistically significant, may align with findings from previous group education interventions, albeit not culturally adapted, which have demonstrated a decline in the improvement of HbA1c over time, both in the short and the long term . It was only with ongoing education sessions or other inputs that these benefits were sustained over a longer period. This might explain why the culturally sensitive intervention for immigrants, delivered over six weeks and without any follow-ups , did not show a statistically significant improvement in HbA1c six months after the intervention. However, previous reviews of culturally appropriate health education interventions in ethnic minority groups (individually and/or in groups) showed sustained improvements in glycaemic control in the short to mid term (3 months) and showed group education to be more effective. The individual changes in HbA1c post-intervention (mean −4.35 [95% CI 9.17; 0.87; SD 10.3] immediately after and −0.56 [95% CI −10.0; 8.88; SD 17.72] at 3 months) were similar to the results in these studies (−0.4 [95% CI −0.5; 0.2] and −4.3 [95% CI −1.4; 7.0] at three months ) and in the culturally sensitive intervention for immigrants (−1.91 [SD 4.32] immediately after and −1.6 [SD 10.49] at three months) . The level of knowledge significantly improved during the intervention, and the individual changes (2.59 [95% CI 1.23; 3.95] immediately after, 1.40 [95% CI 0.54; 3.74] at three months) even showed a better development of knowledge than in the culturally appropriate health education interventions in ethnic minority groups at three months post-intervention (0.35 [95% CI 0.10; 0.59]) . Given the sample size and the confidence intervals, it is likely that the improvements would have been even stronger in a larger sample.
Self-rated health (SRH) remained unchanged during the intervention, and previous studies have shown diverging results: neutral effects on health-related quality-of-life measures (albeit sparsely studied) in culturally appropriate interventions in ethnic minorities, and better self-reported general health (measured with SF-12) in the culturally sensitive intervention for immigrants . However, comparisons with previous studies are difficult due to clinical and methodological heterogeneity and loss to follow-up. The studied education model led to improved knowledge and a better development of knowledge than previous culturally appropriate models for ethnic minority groups . With few exceptions, the previous models did not use purely interactive patient-centred methods , many lacked a sound theoretical base, they varied in length (from a single session to 24 months), and they were mainly delivered by lay community health workers or by nurses, sometimes in combination with dieticians, but without physicians involved . Thus, this model differs as it focuses on migrants, is conducted through focus-group discussions to reach individual beliefs and is thereby both individually and culturally tailored, and includes a multi-professional team also involving a physician (besides a nurse, dietician and interpreter). It has been argued that suitable education programmes should be an integral part of every treatment plan for persons with diabetes and should also include medical aspects . Further, the model has the characteristics found to give the most sustainable results : ongoing group education sessions (every second week) during three months, led by a nurse attending all sessions to provide continuity and develop trustful relations . According to the theoretical base of the model, health education should be executed in a learner-centred manner, respecting and being based on cultural, social and religious values, to have the greatest impact. The role of the staff is to facilitate learning by eliciting the person’s individual beliefs, stimulating interactions and discussions, and supporting with information when needed. The patient is the expert on their health, has an active role, and should be at the centre. The chosen teaching methodology, focus-group discussions, not only facilitates learning but also supports the participants by letting them share their experiences of living with diabetes and of learning to cope with it . Thus, the model includes support, shown to improve health in a culturally sensitive intervention for immigrants . Although knowledge improved significantly due to the studied intervention, the challenge remains how that improved knowledge can be translated into better physical health outcomes, and how translation of that knowledge into action can be aided. The previous culturally sensitive intervention for immigrants showed that active involvement of the target group, through co-creation in the development of the education and emphasised during the implementation, led to improved health outcomes (physical and mental) and to the self-management activities of healthy diet and physical activity. The co-creation was a way to ensure that the education met the preferences, needs and resources of the group, and thereby increased the cultural sensitivity influencing health behaviour.
In the present study, we developed the education model based on experiences from previous studies on individual beliefs about health and illness in different migrant groups , and in a forthcoming study the participants’ evaluations of the model will be reported (used for audit), but the element of co-creation can be strengthened. Auditing of the data collection process can also be added to reach high-quality data on the chosen outcomes . Further, other outcome variables focused on health behaviours and diabetes self-management activities need to be considered and added for long-term follow-up of the intervention. Confidence in selecting appropriate food and in being able to exercise are particular elements of self-efficacy (measuring behavioural change) shown to be related to HbA1c . Even though the benefits of group education for glycaemic control, in terms of peer support and the sharing of experiences, have been shown, persons from different communities may benefit differently from various styles of education. Group sessions, as chosen here, might be beneficial for those focused on social relationships, but not for others who prefer individual sessions, e.g. due to traditions of privacy and the experienced stigma of the disease . The attitude of the healthcare provider towards the participants, whether consultative or decisive (authoritarian), also needs to be further studied in migrants of different origins. To have the greatest impact, health education should be implemented in a manner that respects cultural, social and religious values, which is why the present model proceeds from the participants’ individual beliefs about health and illness, determined by cultural background . Thus, it is tailored to the patients’ understanding and needs and aims to develop risk awareness in order to influence self-care behaviour and health. In the standards of care in diabetes, a systematic approach to supporting patients’ behaviour-change efforts is recommended , and whether the education model needs to be complemented with additional teaching aids, further follow-ups, and other teaching methods, e.g. cooking classes and exercise groups, to transfer knowledge into action needs to be evaluated. Finally, the introduction of the education model into the clinical setting needs particular attention, with staff being trained to shift to a person-centred approach, moving from delivering information towards listening to and addressing individual beliefs, obstacles and motivational needs . They also have to learn to moderate groups, to define their own roles in the team, and to recognise that diabetes is a complex disease that needs to be understood and managed in a holistic way . In the focus group discussions, both the influence of psychosocial factors and the social determinants of health (the economic, political, environmental, and social conditions in which people live) should be addressed, and advice adapted accordingly, to lead to better physical health outcomes . It is highly important to provide everyone with diabetes education that is socially and culturally appropriate for their individual situation . However, whether the team will then need to be supplemented with other skills or professions remains to be seen. The present study might have started processes of knowledge development that need to be further supported. Diabetes knowledge is a prerequisite for good self-care and can act as a mediator for behavioural change and, thus, for HbA1c levels. However, knowledge achievement alone might be insufficient to promote behavioural change .
Furthermore, when considering these results together with the possible short-term improvement in HbA1c (better immediately post-intervention), albeit not statistically significant, and the lack of changes in perceived or self-rated health (SRH), it cannot be ruled out that some inferences were not reached because the limited sample size affected the study’s power. Thus, further studies involving a larger population and long-term follow-ups are needed.

Strengths and limitations
The main strength of this study is that we evaluated a newly developed model based on a sound theoretical foundation, which proceeds from individuals’ own beliefs about health and illness, based on their knowledge, guiding their health-related behaviour (see Hadziabdic et al.) . The results contribute to the generalisability and the applicability of the model in a clinical setting within primary healthcare. A methodological limitation is that this study had no control group , and any causal interpretation of the changes found must be made with care. The results align with what was hypothesised, but the design of the study unfortunately has weaknesses. The original plan was to conduct a randomised controlled trial (RCT) with a Swedish control group. However, the Covid-19 pandemic heavily influenced the implementation of the study, and the post-Covid situation in primary healthcare, with staff shortages and work overload, further hampered the situation . The reality presented barriers impossible to influence, which is why the study design had to be changed. Thus, we cannot make definitive statements about the cause of the observed changes. On the other hand, another strength of the study was the use of an observational study design and the collection of clinical data for research purposes . There was a substantial attrition rate , with 22 of 33 persons starting the intervention after having agreed to take part in the study. The reasons were related to health status, beliefs about health and illness and risk awareness of the disease, as well as time constraints associated with family and job responsibilities, or staff shortages and work overload. These factors have been shown to be of importance for the participation of culturally and linguistically diverse populations in clinical interventions . Furthermore, there was some loss to follow-up for the outcomes studied from baseline to 3 months after the intervention. Three main factors might have had an influence: time, the research process, and the person investigated . As the 3-month follow-up was not part of the diabetes education, the participants might not have seen the relevance of participating. The follow-up interviews were time-consuming and might have conflicted with time constraints associated with social responsibilities in family, work, etc., preventing the person from going for tests afterwards. Doubt about giving the correct knowledge test answers (whether right or wrong cannot be determined) could have threatened participants’ self-perception of their own level of diabetes knowledge and the wish to give a socially desirable response (interviewer bias), resulting in non-response. In some cases, the interviewer failed to record data, giving missing values, as for SRH. Finally, a previous review identified several factors affecting diabetes self-management (e.g. attending appointments with health-care providers, glucose monitoring) among (Arabic-speaking) immigrants, including beliefs (cultural, social, religious), lack of understanding and knowledge of diabetes self-management, education level, diabetes-related distress and social factors . Thus, there were challenges related to the individual and to interpersonal dynamics, and foremost to the institutional context, heavily influenced by the infrastructural context . However, a strength of the study is the use of a within-individual design, analysing changes within individuals . Another strength was the use of the Diabetes Knowledge Test (DKT), with good psychometric properties, shown to be a valid and reliable measure for estimating patients’ general understanding of diabetes . It has been further adapted and used, following translations , in populations of different origins (non-European, European, Scandinavian; immigrants and non-immigrants; industrialised and developing countries) around the world (e.g. ). It might be seen as a limitation that the psychometric properties of the Swedish-translated DKT were not assessed, but on the other hand recommended processes were followed : translation and back-translation by independent professional translators, review by a researcher/clinician (expert), pilot testing (face/content validity checked; well functioning), and interviews assisted by professional interpreters in the individuals’ respective languages, which was considered sufficient. The sample size is restricted, but to increase robustness against distributional violations that could occur in small samples and change the findings, non-parametric methods were also used. Moreover, the sample included mainly individuals originating from the Middle East and some from North Africa, predominantly refugees, with a median time of residence in Sweden of 11.5 years, making them representative of the migrant population of the mid-2000s , encompassing the two largest migrant groups.
This evaluation of the developed culturally appropriate diabetes education model, conducted in focus groups, showed a significantly improved knowledge level and a possible initial change in glycaemic control, but no overall effect. Moreover, there were no observed changes in self-rated health up to 3 months post-intervention. The findings supported the hypothesis of improved knowledge but showed no overall effect on glycaemic control and no change in perceived (self-rated) health. However, due to the limited sample size and the selected study population, with regard both to attrition and to loss to follow-up, the results must be generalised with care. Thus, further studies involving a larger population and long-term follow-ups are needed.

Practice implications
Despite a previously expressed need, the effects of culturally tailored diabetes education models have, with few exceptions, not been evaluated in migrants. Thus, this study fills an important knowledge gap. The model, which is based on individual beliefs about health and illness, underpinned by knowledge, and conducted in focus group discussions, is recommended for use in daily practice within primary healthcare settings. Its aim is to increase knowledge and thereby improve self-care behaviour to promote health among migrants with type 2 diabetes.
Visual and patient-reported outcomes of enhanced versus monofocal intraocular lenses in cataract surgery: a systematic review and meta-analysis
This study adhered to the PRISMA 2020 guidelines for reporting systematic reviews and meta-analyses, including the PRISMA-Search extension recommendations . The review protocol was prospectively registered on PROSPERO (CRD42024561611) in July 2024, with no subsequent changes made post-registration.

Eligibility criteria

Framework
The eligibility criteria and search algorithm were structured around a PICO-ST question: In cataract surgery patients (Population), how does the Eyhance IOL (Intervention) compare to other monofocal IOLs (Comparator) in terms of visual and patient-reported outcomes (Outcome), based on randomized and non-randomized studies (Study Design) with at least one month of follow-up, published between 2019 and 2024 (Timeline)?

Population
The included studies focused on patients who underwent binocular cataract surgery, with no exclusions based on comorbidities, specific IOL models, or variations in surgical techniques, such as corneal incisions or micro-monovision adjustments. This inclusive approach ensured a comprehensive evaluation of the intervention across diverse clinical settings, making the results broadly applicable. Subgroups of studies were analyzed based on variables pre-specified in the protocol (i.e., IOL model, magnitude of astigmatism, programmed target, and comorbidities), along with new variables identified during data extraction, to explore potential sources of bias.

Intervention and Comparator
The intervention group consisted of patients receiving the Eyhance IOL, selected because of its significant representation in the literature, allowing a robust analysis of its effectiveness compared with other IOLs. In the comparator group, any other monofocal IOL was eligible, broadening the comparison and minimizing potential biases from narrow or author-defined classifications such as “Plus,” “Enhance,” or “Conventional” monofocals. This wide inclusion of comparators ensured an adequate number of studies for analysis.

Outcomes
Primary outcomes measured the efficacy of the IOLs, focusing on monocular distance-corrected visual acuities at far (CDVA), intermediate (DCIVA), and near (DCNVA) distances, defocus curves (DC) to evaluate visual performance across different focal points, and far-distance contrast sensitivity (CSF). Secondary outcomes included procedure efficacy and patient-reported outcomes, evaluated through binocular distance-uncorrected measures such as visual acuities at different distances (UDVA, UIVA, and UNVA), defocus curves (bDC), and contrast sensitivity (bCSF). Patient-reported outcomes captured subjective experiences, including the degree of spectacle independence (SI) achieved at far, intermediate, and near distances, and the frequency of photic phenomena (PP) such as halos, glare, and starbursts, and how bothersome these were perceived to be (PD). Additionally, overall patient satisfaction was evaluated, along with whether patients would recommend the intervention or undergo the same procedure again. Studies were included for data extraction regardless of the clinical endpoint provided; for example, binocular defocus curves were extracted whether or not authors described them as distance corrected. This differed from the protocol, which planned to extract only uncorrected binocular defocus curves, but very few studies met this condition.

Other criteria
The included studies were randomized and non-randomized, both prospective and retrospective, which provided a broad methodological base to analyze the intervention’s effects.
To ensure relevant and up-to-date data, only studies published between 2019 and 2024 were considered, aligning with the introduction of the Eyhance IOL in 2019. A minimum follow-up period of around 1 month was required, excluding studies with less than 3 weeks of follow-up, which was deemed insufficient to observe reliable outcomes. Studies published in any language were included, while unpublished reports (such as conference abstracts) were not considered.

Search strategy and information sources
A systematic search was conducted across several electronic databases and clinical trial registries to identify studies meeting the eligibility criteria. The initial search was carried out using the IOLEvidence Database (IOL Evidence App, Indaloftal SL) to find studies related to the Eyhance IOL. After retrieving relevant studies, keyword exploration and index term analysis were performed using PubReMiner (Jan Koster, AMC) to identify common terms used in titles and abstracts. These terms were incorporated into the final search algorithm. The search strategy was developed using a PICO-ST framework (see Supplementary File ) and applied to optimize both the scope and sensitivity of the searches in MEDLINE (PubMed), with no language restrictions. A date range filter was applied from 2019 (the launch date of Eyhance) to 2024 (the search date). To maintain consistency across databases, the search strategy was translated for use in EMBASE (Ovid). Additionally, clinical trial registries, including ClinicalTrials.gov, the Cochrane Central Register of Controlled Trials (CENTRAL), and the World Health Organization (WHO, https://trialsearch.who.int ), were searched to identify ongoing and unpublished studies. This was done to minimize publication bias and ensure the inclusion of relevant data not yet published in peer-reviewed journals. The database searches were completed on June 24th, 2024.
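As an illustration of how such a date-restricted database query could be automated, the sketch below runs a simplified PubMed search through the NCBI E-utilities via Biopython. The query string, email address and retmax value are hypothetical placeholders; the actual search algorithm is the one reported in the Supplementary File.

```python
# Illustrative sketch of an automated PubMed query via NCBI E-utilities (Biopython).
# The query string below is hypothetical and simplified; it is not the review's algorithm.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # placeholder address required by NCBI

query = (
    '("Eyhance" OR "enhanced monofocal" OR "ICB00") '
    'AND ("intraocular lens" OR IOL) AND cataract'
)

handle = Entrez.esearch(
    db="pubmed",
    term=query,
    mindate="2019", maxdate="2024", datetype="pdat",  # publication-date window
    retmax=500,
)
record = Entrez.read(handle)
handle.close()

print(f"{record['Count']} records found")
print(record["IdList"][:10])  # first PMIDs for screening export
```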
Study selection and data collection
After the search, all identified citations were imported into Rayyan (Qatar Computing Research Institute, Doha, Qatar) for reference management. No automation features were employed. Duplicates were identified and removed to ensure the uniqueness of each reference. Two independent reviewers initially screened the titles and abstracts of the remaining references against the predefined eligibility criteria. Studies that did not meet these criteria were excluded. Full-text articles of the remaining citations were subsequently reviewed in detail by the same two reviewers, who rigorously applied the inclusion criteria. Any disagreements arising during the selection process were resolved through discussion, with a third reviewer acting as an arbitrator when required. For data collection, two independent reviewers extracted relevant data from each included study using a pre-designed data extraction tool. The extracted data focused on primary and secondary outcomes, as well as other variables and potential sources of bias, including confounding factors (e.g., comorbidities and procedure modifications, among others; dataset available from the authors upon request). In cases where plots such as defocus curves and contrast sensitivity charts were provided, one reviewer used WebPlotDigitizer (Ankit Rohatgi) to digitize the data, which was then validated by a second reviewer. Any discrepancies between reviewers were resolved through consensus, with a third reviewer available for arbitration when necessary. If any data were missing or unavailable, they were labeled as “Not Available” (NA). Additional data did not need to be requested from study investigators.

Risk of bias
The risk of bias was assessed independently by two reviewers using the RoB 2.0 tool for randomized controlled trials (RCTs) and the ROBINS-I tool for non-randomized studies of interventions . For RCTs, the RoB 2.0 tool evaluated five domains: bias arising from the randomization process, bias due to deviations from intended interventions, bias due to missing outcome data, bias in the measurement of the outcome, and bias in the selection of the reported result. Each study was classified into one of three categories: ‘low’, ‘some concerns’, or ‘high’. Studies identified as having a ‘high’ risk in at least one domain or ‘some concerns’ across multiple domains were deemed to have a higher overall risk of bias. Data entry and visualizations were facilitated using the RoB 2 Excel tool . For non-randomized studies, the ROBINS-I tool assessed the risk of bias across seven domains, including bias due to confounding, participant selection, classification of interventions, deviations from intended interventions, missing data, measurement of outcomes, and selection of reported results. Each study was classified into one of four categories: ‘Low’, ‘Moderate’, ‘Serious’, or ‘Critical’. Discrepancies between reviewers were resolved through discussion, with a third reviewer consulted when necessary. The results were synthesized in risk of bias tables and considered in the interpretation of study findings to inform the strength of the evidence.

Data synthesis and analyses
Eligible studies were assessed by tabulating key characteristics, such as interventions and outcomes, and comparing these with the predefined eligibility criteria outlined in the protocol. Only studies that met these criteria were included in the final synthesis. A protocol modification was made to account for variations in the reporting of binocular defocus curves. Although the original protocol specified the inclusion of uncorrected binocular defocus curves, many studies reported outcomes using best distance correction. As a result, both uncorrected and distance-corrected outcomes were included in the analysis, and any potential bias arising from this variation was carefully examined. The characteristics of the included studies were summarized in a table, and statistical meta-analyses were performed using Comprehensive Meta-Analysis (CMA, Version 4.0). Custom tools developed by one of the authors were used to visualize the data, integrating risk of bias assessments into forest plots. For dichotomous outcomes, effect sizes were expressed as odds ratios, while mean differences were used for continuous outcomes, with 95% confidence intervals calculated for each analysis. Visual acuity and contrast sensitivity outcomes were analyzed in standard units, using logMAR and logCS, respectively. In cases where studies did not report standard deviations for these metrics, commonly used values of 0.1 logMAR and 0.2 logCS were imputed, as these are frequently reported in clinical studies and are typically used for sample size calculations . Patient-reported outcomes, often measured using Likert scales, were dichotomized by grouping responses into categories such as “bothered” or “very bothered” for photic phenomena and “satisfied” or “very satisfied” for overall satisfaction. Spectacle independence was analyzed as a dichotomous outcome, with patients either reporting complete independence from spectacles or not.
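The following sketch illustrates, under stated assumptions, how the handling of continuous outcomes described above could look in practice: missing standard deviations are imputed with the 0.1 logMAR and 0.2 logCS defaults, and each study's mean difference and its variance are computed for later pooling. The study rows and column layout are hypothetical.

```python
# Sketch of preparing continuous visual-acuity outcomes for pooling: impute missing SDs
# with the commonly used defaults and compute each study's mean difference and variance.
import math

DEFAULT_SD_LOGMAR = 0.1   # imputed when a study does not report SD for visual acuity
DEFAULT_SD_LOGCS = 0.2    # imputed for contrast sensitivity

studies = [
    # (name, mean_eyhance, sd_eyhance, n_eyhance, mean_mono, sd_mono, n_mono)
    ("Study A", 0.18, 0.12, 30, 0.30, 0.15, 30),
    ("Study B", 0.20, None, 25, 0.33, None, 27),   # SDs not reported -> imputed
]

def mean_diff_and_variance(row, default_sd=DEFAULT_SD_LOGMAR):
    name, m1, sd1, n1, m2, sd2, n2 = row
    sd1 = sd1 if sd1 is not None else default_sd
    sd2 = sd2 if sd2 is not None else default_sd
    md = m1 - m2                                   # logMAR difference (negative favours Eyhance)
    var = sd1**2 / n1 + sd2**2 / n2                # variance of the mean difference
    return name, md, var

for row in studies:
    name, md, var = mean_diff_and_variance(row)
    print(f"{name}: MD = {md:+.3f} logMAR, SE = {math.sqrt(var):.3f}")
```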
A random-effects model using the DerSimonian and Laird estimator was employed for the meta-analysis, as variability in the IOLs used in the comparison groups, differences in clinic populations, testing protocols, and age distributions were expected to contribute to heterogeneity beyond sampling error. The inverse-variance method was used to pool effect sizes, and statistical heterogeneity was assessed using 95% prediction intervals, the χ² test, and the I² statistic. These heterogeneity measures were presented in the forest plots for each analysis. Subgroup analyses were conducted using a mixed-effects model, which applied a random-effects model for within-group comparisons and a pooled tau-squared value for the overall effect . Sensitivity analyses were performed to assess the robustness of the results by examining sources of bias identified during the review. These included studies that involved patients with various magnitudes of corneal astigmatism, as well as patients with ocular comorbidities (e.g., glaucoma, diabetic retinopathy, macular degeneration). Additional sources of variation included differences in the IOLs used in the comparator group, mean age of participants, outcome measurement methods, and follow-up durations. Furthermore, meta-regression analyses were performed to explore the impact of continuous variables, such as age, on the study outcomes.

Meta-bias and confidence in cumulative evidence
A funnel plot was created and examined to investigate potential small-study bias whenever more than ten studies could be pooled in a single meta-analysis. The number of studies missing from the funnel plot was estimated using the trim-and-fill method . Additionally, publication bias was assessed using Begg’s rank correlation test and Egger’s weighted regression test . These methods were selected to detect potential asymmetry in the funnel plot and quantify the likelihood of small-study bias. Any discrepancies in assessments were resolved by consensus between two independent reviewers. In cases where statistical pooling was not feasible, findings are presented in narrative form. The quality of evidence for each outcome was assessed using the GRADE approach . The criteria for downgrading the quality of evidence included risk of bias, inconsistency, imprecision, indirectness, and others. Where appropriate, evidence was upgraded based on factors such as the magnitude of effect and the absence of plausible confounding. Two independent reviewers conducted the GRADE assessments, with disagreements resolved through discussion or referral to a third reviewer.
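To make the pooling approach explicit, the following Python sketch computes a DerSimonian and Laird random-effects estimate, the I² statistic, and an Egger regression intercept test from study-level mean differences and variances. The input values are hypothetical, and the actual analyses were performed in Comprehensive Meta-Analysis; this is a sketch of the standard formulas, not a reproduction of that software.

```python
# Minimal sketch of DerSimonian-Laird random-effects pooling with I^2 and an
# Egger regression test. The study effect sizes below (mean differences in
# logMAR with their variances) are hypothetical.
import numpy as np
from scipy import stats

y = np.array([-0.12, -0.09, -0.15, -0.05, -0.11])       # study mean differences
v = np.array([0.0009, 0.0012, 0.0015, 0.0010, 0.0008])  # their variances

# Fixed-effect step used to estimate the between-study variance (tau^2).
w = 1.0 / v
y_fe = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fe) ** 2)
df = len(y) - 1
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - df) / C)
I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0

# Random-effects (DerSimonian-Laird) pooled estimate.
w_re = 1.0 / (v + tau2)
pooled = np.sum(w_re * y) / np.sum(w_re)
se_pooled = np.sqrt(1.0 / np.sum(w_re))
ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
print(f"Pooled MD = {pooled:.3f} (95% CI {ci[0]:.3f} to {ci[1]:.3f}), "
      f"tau^2 = {tau2:.5f}, I^2 = {I2:.1f}%")

# Egger test: regress standardized effects on precision; a non-zero intercept
# suggests small-study (publication) bias. Requires at least three studies.
se = np.sqrt(v)
reg = stats.linregress(1.0 / se, y / se)
t_int = reg.intercept / reg.intercept_stderr
p_egger = 2 * stats.t.sf(abs(t_int), df=len(y) - 2)
print(f"Egger intercept = {reg.intercept:.2f}, p = {p_egger:.3f}")
```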
Framework The eligibility criteria and search algorithm were structured around a PICO-ST question: In cataract surgery patients (Population), how does the Eyhance IOL (Intervention) compare to other monofocal IOLs (Comparator) in terms of visual and patient-reported outcomes (Outcome), based on randomized and non-randomized studies (Study Design) with at least one month of follow-up, published between 2019 and 2024 (Timeline). Population The included studies focused on patients who underwent binocular cataract surgery, with no exclusions based on comorbidities, specific IOL models, or variations in surgical techniques, such as corneal incisions or micro-monovision adjustments. This inclusive approach ensured a comprehensive evaluation of the intervention across diverse clinical settings, making the results broadly applicable. Subgroups of studies were analyzed based on variables pre-specified in the protocol (i.e., IOL model, magnitude of astigmatism, programmed target, and comorbidities), along with new variables identified during data extraction, to explore potential sources of bias. Intervention and Comparator The intervention group consisted of patients receiving the Eyhance IOL, selected due to its significant representation in the literature, allowing a robust analysis of its effectiveness compared to other IOLs. In the comparator group, any other monofocal IOLs were included to broaden the comparison and minimize potential biases from narrow or author-defined classifications like “Plus,” “Enhance,” or “Conventional” monofocals. This wide inclusion of comparators ensured an adequate number of studies for analysis. Outcomes Primary outcomes measured the efficacy of the IOLs, focusing on monocular distance-corrected visual acuities at far (CDVA), intermediate (DCIVA), and near (DCNVA) distances, defocus curves (DC) to evaluate visual performance across different focal points, and far-distance contrast sensitivity (CSF). Secondary outcomes included procedure efficacy and patient-reported outcomes, evaluated through binocular distance-uncorrected measures such as visual acuities at different distances (UDVA, UIVA, and UNVA), defocus curves (bDC), and contrast sensitivity (bCSF). Patient-reported outcomes captured subjective experiences, including the degree of spectacle independence (SI) achieved at far, intermediate, and near distances, and the frequency (PP) and bothersome to photic phenomena (PD) such as halos, glare, and starbursts. Additionally, overall patient satisfaction was evaluated, along with whether patients would recommend the intervention or undergo the same procedure again. Studies were included for data extraction regardless of the clinical endpoint provided; for example, binocular defocus curves were extracted whether or not authors described them as distance corrected. This differed from the protocol which was planned to extract uncorrected binocular defocus curves, but very few studies accomplished this condition. Other criteria The included studies were randomized and non-randomized, both prospective and retrospective, which provided a broad methodological base to analyze the intervention’s effects. To ensure relevant and up-to-date data, only studies published between 2019 and 2024 were considered, aligning with the introduction of the Eyhance IOL in 2019. A minimum follow-up period of around 1 month was required, excluding studies with less than 3 weeks of follow-up, which was deemed insufficient to observe reliable outcomes. 
Studies published in any language were included, while unpublished reports (such as conference abstracts) were not considered.
The eligibility criteria and search algorithm were structured around a PICO-ST question: In cataract surgery patients (Population), how does the Eyhance IOL (Intervention) compare to other monofocal IOLs (Comparator) in terms of visual and patient-reported outcomes (Outcome), based on randomized and non-randomized studies (Study Design) with at least one month of follow-up, published between 2019 and 2024 (Timeline).
The included studies focused on patients who underwent binocular cataract surgery, with no exclusions based on comorbidities, specific IOL models, or variations in surgical techniques, such as corneal incisions or micro-monovision adjustments. This inclusive approach ensured a comprehensive evaluation of the intervention across diverse clinical settings, making the results broadly applicable. Subgroups of studies were analyzed based on variables pre-specified in the protocol (i.e., IOL model, magnitude of astigmatism, programmed target, and comorbidities), along with new variables identified during data extraction, to explore potential sources of bias.
The intervention group consisted of patients receiving the Eyhance IOL, selected due to its significant representation in the literature, allowing a robust analysis of its effectiveness compared to other IOLs. In the comparator group, any other monofocal IOLs were included to broaden the comparison and minimize potential biases from narrow or author-defined classifications like “Plus,” “Enhance,” or “Conventional” monofocals. This wide inclusion of comparators ensured an adequate number of studies for analysis.
Primary outcomes measured the efficacy of the IOLs, focusing on monocular distance-corrected visual acuities at far (CDVA), intermediate (DCIVA), and near (DCNVA) distances, defocus curves (DC) to evaluate visual performance across different focal points, and far-distance contrast sensitivity (CSF). Secondary outcomes included procedure efficacy and patient-reported outcomes, evaluated through binocular distance-uncorrected measures such as visual acuities at different distances (UDVA, UIVA, and UNVA), defocus curves (bDC), and contrast sensitivity (bCSF). Patient-reported outcomes captured subjective experiences, including the degree of spectacle independence (SI) achieved at far, intermediate, and near distances, and the frequency (PP) and bothersome to photic phenomena (PD) such as halos, glare, and starbursts. Additionally, overall patient satisfaction was evaluated, along with whether patients would recommend the intervention or undergo the same procedure again. Studies were included for data extraction regardless of the clinical endpoint provided; for example, binocular defocus curves were extracted whether or not authors described them as distance corrected. This differed from the protocol which was planned to extract uncorrected binocular defocus curves, but very few studies accomplished this condition.
The included studies were randomized and non-randomized, both prospective and retrospective, which provided a broad methodological base to analyze the intervention’s effects. To ensure relevant and up-to-date data, only studies published between 2019 and 2024 were considered, aligning with the introduction of the Eyhance IOL in 2019. A minimum follow-up period of around 1 month was required, excluding studies with less than 3 weeks of follow-up, which was deemed insufficient to observe reliable outcomes. Studies published in any language were included, while unpublished reports (such as conference abstracts) were not considered.
A systematic search was conducted across several electronic databases and clinical trial registries to identify studies meeting the eligibility criteria. The initial search was carried out using the IOLEvidence Database (IOL Evidence App, Indaloftal SL) to find studies related to Eyhance IOL. After retrieving relevant studies, keyword exploration and index term analysis was performed using PubReMiner (Jan Koster, AMC) to identify common terms used in titles and abstracts. These terms were incorporated into the final search algorithm. The search strategy was developed using a PICO-ST framework (see Supplementary File ) and applied to optimize both the scope and sensitivity of the searches in MEDLINE (PubMed), with no language restrictions. A date range filter was applied from 2019 (the launch date of Eyhance) to 2024 (the search date). To maintain consistency across databases, the search strategy was translated for use in EMBASE (Ovid). Additionally, clinical trial registries, including ClinicalTrials.gov, the Cochrane Central Register of Controlled Trials (CENTRAL), and the World Health Organization (WHO, https://trialsearch.who.int ), were searched to identify ongoing and unpublished studies. This was done to minimize publication bias and ensure the inclusion of relevant data not yet published in peer-reviewed journals. The database searches were completed on June 24th, 2024.
After the search, all identified citations were imported into Rayyan (Qatar Computing Research Institute, Doha, Qatar) for reference management. No automation features were employed. Duplicates were identified and removed to ensure the uniqueness of each reference. Two independent reviewers initially screened the titles and abstracts of the remaining references against predefined eligibility criteria. Studies that did not meet these criteria were excluded. Full-text articles of the remaining citations were subsequently reviewed in detail by the same two reviewers, who rigorously applied the inclusion criteria. Any disagreements arising during the selection process were resolved through discussion, with a third reviewer acting as an arbitrator when required. For data collection, two independent reviewers extracted relevant data from each included study using a pre-designed data extraction tool. The extracted data focused on primary and secondary outcomes, as well as other variables and potential sources of bias, including confounding factors (e.g., comorbidities, procedure modifications, among others, dataset available from the authors upon request). In cases where plots such as defocus curves and contrast sensitivity charts were provided, one reviewer used WebPlotDigitizer (Ankit Rohatgi) to digitize the data, which was then validated by a second reviewer. Any discrepancies between reviewers were resolved through consensus, with a third reviewer available for arbitration when necessary. If any data were missing or unavailable, it was labeled as “Not Available” (NA). Additional data was not required to be asked to study investigators.
The risk of bias was assessed independently by two reviewers using the RoB 2.0 tool for randomized controlled trials (RCTs) and the ROBINS-I tool for non-randomized studies of interventions . For RCTs, the RoB 2.0 tool evaluated five domains: bias arising from the randomization process, bias due to deviations from intended interventions, bias due to missing outcome data, bias in the measurement of the outcome, and bias in the selection of the reported result. Each study was classified into one of three categories: ‘low’, ‘some concerns’, or ‘high’. Studies identified as having a ‘high’ risk in at least one domain or ‘some concerns’ across multiple domains were deemed to have a higher overall risk of bias. Data entry and visualizations were facilitated using the RoB 2 Excel tool . For non-randomized studies, the ROBINS-I tool assessed the risk of bias across seven domains, including bias due to confounding, participant selection, classification of interventions, deviations from intended interventions, missing data, measurement of outcomes, and selection of reported results. Each study was classified into one of four categories: ‘Low’, ‘Moderate’, ‘Serious’, or ‘Critical’. Discrepancies between reviewers were resolved through discussion, with a third reviewer consulted when necessary. The results were synthesized in risk of bias tables and considered in the interpretation of study findings to inform the strength of the evidence.
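The overall RoB 2 judgement rule described above can be summarized schematically as follows. This is a simplified rendering for illustration only; the official RoB 2 algorithm contains additional nuances.

```python
# A simplified rendering of the overall RoB 2 judgement rule described in the text:
# 'high' in any domain, or 'some concerns' in multiple domains, yields a higher overall risk.
# Treat this as illustrative, not the official RoB 2 algorithm.
from typing import Dict

def overall_rob2(domains: Dict[str, str]) -> str:
    """domains maps the five RoB 2 domains to 'low', 'some concerns', or 'high'."""
    judgements = list(domains.values())
    if "high" in judgements or judgements.count("some concerns") > 1:
        return "high"
    if "some concerns" in judgements:
        return "some concerns"
    return "low"

example = {
    "randomization process": "low",
    "deviations from intended interventions": "some concerns",
    "missing outcome data": "low",
    "measurement of the outcome": "some concerns",
    "selection of the reported result": "low",
}
print(overall_rob2(example))  # 'high' under this simplified rule
```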
Eligible studies were assessed by tabulating key characteristics, such as interventions and outcomes, and comparing these with the predefined eligibility criteria outlined in the protocol. Only studies that met these criteria were included in the final synthesis. A protocol modification was made to account for variations in the reporting of binocular defocus curves. Although the original protocol specified the inclusion of uncorrected binocular defocus curves, many studies reported outcomes using best distance correction. As a result, both uncorrected and distance-corrected outcomes were included in the analysis, and any potential bias arising from this variation was carefully examined.
The characteristics of the included studies were summarized in a table, and statistical meta-analyses were performed using Comprehensive Meta-Analysis (CMA, Version 4.0). Custom tools developed by one of the authors were used to visualize the data, integrating risk-of-bias assessments into forest plots. For dichotomous outcomes, effect sizes were expressed as odds ratios, while mean differences were used for continuous outcomes, with 95% confidence intervals calculated for each analysis. Visual acuity and contrast sensitivity outcomes were analyzed in standard units, logMAR and logCS, respectively. In cases where studies did not report standard deviations for these metrics, commonly used values of 0.1 logMAR and 0.2 logCS were imputed, as these are frequently reported in clinical studies and are typically used for sample size calculations . Patient-reported outcomes, often measured using Likert scales, were dichotomized by grouping responses into categories such as “bothered” or “very bothered” for photic phenomena and “satisfied” or “very satisfied” for overall satisfaction. Spectacle independence was analyzed as a dichotomous outcome, with patients either reporting complete independence from spectacles or not.
A random-effects model using the DerSimonian and Laird estimator was employed for the meta-analysis, as variability in the IOLs used in the comparison groups, differences in clinic populations, testing protocols, and age distributions were expected to contribute to heterogeneity beyond sampling error. The inverse-variance method was used to pool effect sizes, and statistical heterogeneity was assessed using 95% prediction intervals, the χ² test, and the I² statistic. These heterogeneity measures were presented in the forest plots for each analysis. Subgroup analyses were conducted using a mixed-effects model, which applied a random-effects model for within-group comparisons and a pooled tau-squared value for the overall effect . Sensitivity analyses were performed to assess the robustness of the results by examining sources of bias identified during the review. These included studies involving patients with varying magnitudes of corneal astigmatism, as well as patients with ocular comorbidities (e.g., glaucoma, diabetic retinopathy, macular degeneration). Additional sources of variation included differences in the IOLs used in the comparator group, mean age of participants, outcome measurement methods, and follow-up durations. Furthermore, meta-regression analyses were performed to explore the impact of continuous variables, such as age, on the study outcomes.
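For readers unfamiliar with the DerSimonian and Laird estimator, the sketch below illustrates the inverse-variance pooling, I², and 95% prediction interval calculations on invented mean differences; it mirrors the approach described above but is not the CMA implementation used for the analyses. For dichotomous outcomes, the same machinery is applied to log odds ratios, with the pooled estimate exponentiated back to an odds ratio.

```python
# A minimal sketch of DerSimonian-Laird random-effects pooling with I^2 and a 95% prediction
# interval. Effect sizes are mean differences in logMAR; the numbers are invented, not study data.
import numpy as np
from scipy import stats

y = np.array([-0.14, -0.04, -0.12, -0.02, -0.09])   # study effect estimates (logMAR)
se = np.array([0.02, 0.03, 0.025, 0.04, 0.03])      # their standard errors

w_fixed = 1.0 / se**2
y_fixed = np.sum(w_fixed * y) / np.sum(w_fixed)

k = len(y)
Q = np.sum(w_fixed * (y - y_fixed) ** 2)                      # Cochran's Q
C = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (Q - (k - 1)) / C)                            # DerSimonian-Laird tau^2
I2 = max(0.0, (Q - (k - 1)) / Q) * 100                        # heterogeneity statistic (%)

w_random = 1.0 / (se**2 + tau2)                               # inverse-variance weights
pooled = np.sum(w_random * y) / np.sum(w_random)
se_pooled = np.sqrt(1.0 / np.sum(w_random))
ci = pooled + np.array([-1, 1]) * stats.norm.ppf(0.975) * se_pooled

# 95% prediction interval: t with k-2 df, adding tau^2 to the pooled variance.
t_crit = stats.t.ppf(0.975, df=k - 2)
pi = pooled + np.array([-1, 1]) * t_crit * np.sqrt(tau2 + se_pooled**2)

print(f"Pooled MD = {pooled:.3f} logMAR, 95% CI {ci.round(3)}, 95% PI {pi.round(3)}, I2 = {I2:.0f}%")
```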
A funnel plot was created and examined to investigate potential small-study biases if more than ten studies could be pooled in a single meta-analysis. The number of studies missing from the funnel plot was estimated using the trim-and-fill method . Additionally, publication bias was assessed using Begg’s rank correlation and Egger’s weighted regression test . These methods were selected to detect potential asymmetry in the funnel plot and to quantify the likelihood of small-study bias. Any discrepancies in assessments were resolved by consensus between two independent reviewers. In cases where statistical pooling was not feasible, findings are presented in narrative form. The quality of evidence for each outcome was assessed using the GRADE approach . The criteria for downgrading the quality of evidence included risk of bias, inconsistency, imprecision, indirectness, and other factors. Where appropriate, evidence was upgraded based on factors such as the magnitude of effect and the absence of plausible confounding. Two independent reviewers conducted the GRADE assessments, with disagreements resolved through discussion or referral to a third reviewer.
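The two small-study-bias tests named above can be sketched as follows on invented effect estimates; the Egger intercept test shown assumes SciPy ≥ 1.7 and is illustrative rather than the exact implementation used.

```python
# A minimal sketch of Begg's rank correlation and Egger's regression test on invented
# per-study effect estimates and standard errors (not extracted study data).
import numpy as np
from scipy import stats

y = np.array([-0.10, -0.07, -0.12, -0.05, -0.09, -0.15, -0.04])   # effect estimates
se = np.array([0.020, 0.030, 0.025, 0.040, 0.030, 0.050, 0.045])  # standard errors

# Begg's test: Kendall correlation between variance-adjusted standardized effects and variances.
w = 1.0 / se**2
pooled = np.sum(w * y) / np.sum(w)
v_star = se**2 - 1.0 / np.sum(w)                  # variance of (y_i - pooled)
tau, p_begg = stats.kendalltau((y - pooled) / np.sqrt(v_star), se**2)

# Egger's test: regress the standard normal deviate (y/se) on precision (1/se) and test
# whether the intercept differs from zero, which would indicate funnel-plot asymmetry.
res = stats.linregress(1.0 / se, y / se)          # intercept_stderr requires SciPy >= 1.7
t_int = res.intercept / res.intercept_stderr
p_egger = 2 * stats.t.sf(abs(t_int), df=len(y) - 2)

print(f"Begg: tau = {tau:.2f}, p = {p_begg:.3f}; "
      f"Egger intercept = {res.intercept:.2f}, p = {p_egger:.3f}")
```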
Included studies
A total of 148 records were retrieved from the initial systematic search and clinical trial databases, which was reduced to 82 after removing duplicates. The first screening of titles and abstracts resulted in the exclusion of 48 references for various reasons, as outlined in Fig. . The most common reasons for exclusion were interventions different from the Eyhance IOL and outcomes unrelated to those described in the eligibility criteria. An additional three studies were excluded after full-text screening, leaving a total of 31 studies for data extraction.
Studies description
Eligible trials comprised randomized clinical trials ( n = 8) and comparative case series, either prospective ( n = 10), retrospective ( n = 12) or cross-sectional ( n = 1). Two studies, Mencucci et al. and Giglio et al. , compared Eyhance with more than one monofocal IOL; these were differentiated by adding a letter after the year (i.e., Mencucci et al. a or Mencucci et al. b for the Tecnis PCB00 or Clareon CNA0T0 comparison, respectively). Characteristics of the study populations, such as inclusion criteria for corneal astigmatism, astigmatism management with different types of IOLs and corneal incisions, mean age of the intervention group, eye comorbidities, and variations in targeting with techniques such as micro-monovision, are detailed in Table . The complete extraction for all variables is available from the authors upon request. Of the studies for which data were extracted, only Hwang et al. did not report any of the primary or secondary endpoints described in the protocol. In addition, Singh was not considered in the DCIVA analysis because the reported outcome (better than 0 logMAR in both groups) suggests that corrected intermediate visual acuity (CIVA) was measured instead, even though it was reported as DCIVA. The Donoso study was likewise excluded from the DCNVA analysis for the same reason at near distance.
Risk of bias
The risk of bias was evaluated at the outcome level, recognizing that different confounders can affect specific endpoints, potentially limiting confidence in efficacy estimates. For example, the failure to report postoperative residual refractive error contributed to the risk of bias for UIVA but not for DCIVA. Therefore, the risk of bias was incorporated into the forest plots using both the RoB 2 and ROBINS-I tools, acknowledging differences in the number of domains assessed. Specifically, domains F and G were marked as “not available” (NA) when applying the RoB 2 tool to RCTs. Supplementary File provides detailed reasons for the judgments in each domain and endpoint, while Supplementary File displays weighted bar plots showing the distribution of risk-of-bias judgments within each domain. Critical risk of bias was more frequent in case series than in RCTs, primarily due to confounding, and more common in studies reporting binocular outcomes without distance correction or patient-reported outcomes (~50%) than in those reporting monocular outcomes with distance correction (~25%) (see Supplementary File ). In non-randomized studies assessed with the ROBINS-I tool, confounding was frequently observed, such as a lack of reporting on demographic variables or comorbidities that might influence outcomes. Selection bias was also common, with many studies failing to detail participant selection or excluding patients with complications. Some studies introduced classification issues when multiple types of IOLs were used for the comparator group.
Deviations from intended interventions were often poorly described, particularly regarding how surgeries were performed or re-interventions were handled. Additionally, some studies failed to report the absence of missing data, raising concerns about moderate bias. Standard methods for measuring outcomes were sometimes not properly reported, leading to critical bias. Incomplete reporting of outcomes also emerged as a recurring issue, with some studies omitting key outcomes or using non-standard analysis methods. In RCTs evaluated with the RoB 2 tool, common sources of bias included insufficient reporting on randomization procedures, with missing details on sequence generation and allocation concealment. Baseline differences between intervention groups, such as age, were also noted. Deviations from intended interventions were observed, including unclear masking of participants, carers, and assessors, and underreporting of adverse events. While missing outcome data were often deemed negligible, they were not always clearly addressed. Measurement bias was also a concern, especially when assessors, aware of the intervention, may have influenced outcome thresholds, despite using standard charts. Lastly, concerns about selective reporting arose, as some studies lacked publicly available protocols. These biases, particularly confounding, suggest caution when interpreting Eyhance efficacy, especially for binocular outcomes without distance correction or patient-reported outcomes.
Primary outcomes and sensitivity analysis
Monocular distance-corrected visual acuities
Monocular distance-corrected visual acuities are a key indicator of the IOL’s efficacy, as procedural modifications are minimized, reducing confounding factors such as postoperative residual refractive errors or binocular summation. However, the I² was high (≥64%, p < 0.0001) across the three distances, indicating that the variance in effects was greater than expected from random variability alone. There was no clinically relevant difference for CDVA, with prediction intervals ranging from −0.02 to 0.03 logMAR (see Fig. ), suggesting differences of one letter or less on a visual acuity chart with five letters per row. In contrast, differences in DCIVA and DCNVA were clinically relevant, with pooled estimates of −0.09 and −0.08 logMAR, respectively, favoring Eyhance for both DCIVA (see Fig. ) and DCNVA (see Fig. ) ( p < 0.0001). A subgroup analysis demonstrated that classifying Zoe Primus-HD (Corbelli et al. ), Vivinex Impress (Mencucci et al. a), and IsoPure (Mencucci et al. b) as Enhanced IOLs, along with the Eyhance IOL, revealed no significant differences in either DCIVA or DCNVA (see Supplementary Figs. and , respectively) and led to a decrease in I². However, when these Enhanced IOLs were compared with the remaining monofocal IOLs (Tecnis ZCB00, PCB00, or AAB00; Clareon CNA0T0 or CCA0T0; Vivinex iSert; RayOne Monofocal; AcrySof SN60WF; and SofPort, enVista, or Toric), the differences increased to −0.11 logMAR and −0.12 logMAR for DCIVA and DCNVA, respectively ( p < 0.0001). Exploration of other variables, including age, corneal astigmatism inclusion criteria, and comorbidities, revealed that only the use of standard measurement methods (ETDRS at 85 cd/m²) significantly reduced I² from 54% ( p = 0.005) to 20% ( p = 0.27) for DCIVA in the subgroup excluding the three previously described IOLs. However, no clinically relevant differences (<0.02 logMAR) were found between studies using standard versus non-standard testing methods.
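For reference, the letter and line equivalences used throughout (one chart line of five letters spans 0.1 logMAR) can be made explicit with a short conversion, shown below for the pooled estimates reported above.

```python
# One logMAR line on a five-letter-per-row chart spans 0.1 logMAR, i.e., 0.02 logMAR per letter.
LOGMAR_PER_LINE = 0.1
LETTERS_PER_LINE = 5

def logmar_to_letters(diff_logmar: float) -> float:
    return diff_logmar / LOGMAR_PER_LINE * LETTERS_PER_LINE

# The pooled DCIVA difference of -0.09 logMAR corresponds to about 4.5 letters (roughly one line);
# the CDVA prediction interval of -0.02 to 0.03 logMAR stays within +/- 1.5 letters.
print(round(logmar_to_letters(-0.09), 1), round(logmar_to_letters(0.03), 1))
```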
Overall, the included studies were at low to moderate risk of bias (see Supplementary File ). A subgroup analysis by risk of bias revealed an underestimation of mean differences for DCIVA in studies with serious, high, or critical bias (−0.08 logMAR, CI: −0.12 to −0.05) compared to studies at other bias levels (−0.12 logMAR, CI: −0.14 to −0.10), and similarly for DCNVA (−0.06, CI: −0.16 to 0.03) versus (−0.12, CI: −0.17 to −0.06).
Monocular distance-corrected defocus curve
Figure displays the forest plot for subgroups based on the defocus lens from the monocular defocus curve, using the best distance correction. The defocus range analyzed was 0 to −2 D, as all studies reporting the defocus curve covered this range, whereas some lacked data beyond −2.0 D. Notably, only five studies reported standard deviations. Therefore, a standard deviation of 0.1 logMAR was assumed for studies without this information, as outlined in the “Methods” section. A subgroup analysis, limited to the studies that reported standard deviations, confirmed the same conclusions as the overall analysis. Statistically significant differences between Eyhance and monofocal IOLs were observed from −0.5 D to −2.0 D, with a slight increase in effect as defocus increased. Clinically relevant differences were found for defocus levels beyond −1.0 D, and these differences were also significant when compared to IsoPure (Salgado-Borges ). Similar to the measurement of proximal visual acuities, the I² value was high (≥68%, p < 0.0001), indicating substantial heterogeneity, and the risk of bias was balanced between serious, high, or critical bias and low to moderate bias. Some studies showing less pronounced differences between groups either had a critical risk of bias or involved a population with ocular comorbidities (Nam ).
Monocular distance-corrected contrast sensitivity
Few studies reported monocular distance-corrected CSF, particularly under mesopic or glare conditions, with no more than three studies providing outcomes. The photopic condition without glare had the most data, with five studies, but significant limitations were noted, as only two studies reported standard deviations. As a result, a standard deviation of 0.2 logCS was assumed to pool the data. Figure shows no significant differences between Eyhance and monofocal IOLs, particularly for low and middle spatial frequencies.
Secondary outcomes
Binocular uncorrected visual acuities
Binocular uncorrected visual acuities reflect the efficacy of the procedure, with procedural modifications potentially acting as confounders that should be taken into consideration. Supplementary Fig. shows no significant differences in UDVA (0.00 logMAR, 95% CI: −0.01 to 0.01 logMAR, z = 0.71, p = 0.48) between Eyhance and monofocal lenses, which may be attributed to the lack of differences in postoperative spherical equivalent (−0.01 D, 95% CI: −0.08 to 0.07 D, z = −0.51, p = 0.81), with a prediction interval ranging from −0.34 to 0.33 D. For binocular UIVA and UNVA, differences favoring Eyhance were observed, with −0.14 logMAR for UIVA and −0.15 logMAR for UNVA when subgroup analyses were conducted based on classification type (see Supplementary Figs. and , respectively). The binocular outcomes followed a similar trend to the monocular outcomes reported earlier but with a higher prevalence of serious or critical risk of bias (see Supplementary File ).
In contrast to studies reporting monocular visual acuities, the use of non-standard measurement methods led to an underestimation of pooled differences for UIVA (−0.12, CI: −0.16 to −0.07 logMAR) compared to standard methods (−0.16, CI: −0.21 to −0.11 logMAR). However, this underestimation was not observed for UNVA. A slight underestimation due to a higher risk of bias was observed for UIVA (−0.13, CI: −0.18 to −0.08 logMAR) in studies with higher bias, compared to those with a lower risk of bias (−0.16, CI: −0.20 to −0.10 logMAR). Similarly, for UNVA, the underestimation was −0.13 (CI: −0.22 to −0.05 logMAR) in higher-bias studies compared to −0.17 (CI: −0.25 to −0.09 logMAR) in studies with lower bias. No other significant confounders were identified, including age, postoperative spherical equivalent, or corneal astigmatism inclusion criteria.
Binocular defocus curves
Supplementary Fig. presents the forest plot of the subgroup analysis based on the defocus lens in the binocular defocus curve. All studies assessed the binocular defocus curve with the best distance correction, except for Choi , who reported the uncorrected distance defocus curve, whereas Lopes , Elbakry , and Unsal did not clearly specify this in their manuscripts. The analysis focused on the defocus range from 0 to −2.00 D, with an assumed standard deviation of 0.1 logMAR in up to 12 comparisons. Statistically significant differences between Eyhance and monofocal IOLs were observed from −0.5 D to −2.00 D, with the differences increasing slightly as defocus increased. Clinically relevant differences were noted at defocus levels beyond −1.0 D. No statistically significant differences were reported in four studies: Mencucci (a, Vivinex Impress; b, IsoPure), Corbelli (Zoe Primus-HD), Elbakry (ZCB00) and Micheletti (Clareon CCA0T0 or CNA0T0). These outcomes were consistent with the measurements of proximal visual acuity in the first two studies, but not for Micheletti and Elbakry, which reported significant differences in favor of Eyhance for binocular DCIVA (−0.05 logMAR, p < 0.001) and binocular UIVA (−0.39 logMAR, p < 0.001), respectively. Additionally, both studies were rated as having a critical risk of bias (see Supplementary File ). In the selection domain for Micheletti, there was an inherent bias due to the inclusion of patients with astigmatism requiring toric IOLs; however, these lenses were utilized exclusively in the Eyhance group. There was also a measurement and reporting bias, as discrepancies exist between the manuscript and the protocol regarding the method of measuring visual acuity, specifically with respect to the corrected visual acuity for a targeted refractive error of −0.25 D for Clareon. The critical risk of bias for Elbakry, in contrast, was attributed to confounding, as well as to the methods used for measuring and reporting outcomes: patients were targeted non-uniformly across groups, which likely led to underestimations of UIVA. A subgroup analysis excluding these studies did not result in a significant change in the overall mean differences, which shifted slightly from −0.05 logMAR (CI: −0.06 to −0.04; PI: −0.15 to 0.04) to −0.06 logMAR (CI: −0.07 to −0.05; PI: −0.15 to 0.03).
Binocular contrast sensitivity function
Only four studies reported binocular CSF comparing Eyhance with five monofocal IOLs.
However, three of these five comparisons (Mencucci a, Vivinex Impress; b, IsoPure; Corbelli , Zoe Primus-HD) showed no differences in intermediate visual acuities, as discussed in previous sections. Therefore, only two studies remained relevant: Steinmüller , which reported with best distance correction, and Corbelli , which did not specify. The differences between groups were less than 0.1 logCS across all spatial frequencies, lacking clinical relevance.
Spectacle independence
Six to seven comparisons (depending on the distance) were pooled to evaluate spectacle independence, with studies ranging from serious to critical risk, as well as low to moderate risk (see Supplementary File ). No differences between Eyhance and other monofocals were found for far distance (OR: 0.83, 95% CI: 0.28–2.48) (see Fig. ). Conversely, Eyhance significantly increased the odds of achieving spectacle independence compared to other monofocals for intermediate distance, with an odds ratio of 7.85 (95% CI: 4.08–15.09) (see Fig. ). The smallest effect was observed in the study by Corbelli (Zoe Primus-HD). After excluding this study, the odds ratio increased to 11.5 (95% CI: 6.13–20.94), indicating a greater increase in spectacle independence. For near spectacle independence, the odds were also significantly higher in favor of Eyhance after this sub-analysis, though the effect size was considerably lower (OR: 2.16, 95% CI: 1.21–3.85).
Photic phenomena and dysphotopsia
The odds of experiencing frequent or very frequent photic phenomena (halo, glare, or starburst) were not elevated by Eyhance, with an overall odds ratio of 0.62 (CI: 0.32–1.20; PI: 0.30–1.28) (see Supplementary Fig. ). Similarly, the odds of being bothered or very bothered by these phenomena were comparable between the groups, with an odds ratio of 1.13 (CI: 0.79–1.63; PI: 0.76–1.69) (see Fig. ). As with other outcomes measured binocularly without distance correction, the risk of bias was higher compared to outcomes assessed monocularly with distance correction (see Supplementary File ).
Satisfaction and undergoing the same intervention
Satisfaction after the procedure was reported by only three studies: Dell , Lopes , and Donoso . In both groups, more than 90% of patients were satisfied or very satisfied, with no clinically relevant differences observed (≤3%). Additionally, only Lopes reported 0% dissatisfied or very dissatisfied patients, while Dell was the only study to report the likelihood of recommending or undergoing the same procedure, again showing no clinically relevant differences (equal to 3%). Due to the poor reporting frequency and inconsistent data across studies, it was not possible to pool these outcomes.
Small-study effect evaluation
Funnel plots were inspected to investigate potential small-study biases for visual acuity and defocus curve measurements, as ten or more studies were pooled for these variables. The number of studies missing to the left or right of the mean was ≤2 for all variables, with minimal impact on the outcomes (<0.01 logMAR). Begg’s rank correlation was tested instead of Egger’s test, as the latter can have low power and assumes a linear relationship between standard error and effect size, which might not always hold.
Begg’s rank correlation was significant for intermediate and near visual acuity measurements, whether monocular with distance correction or binocular without correction ( p < 0.05), indicating a larger effect in studies with smaller sample sizes. In total, fourteen clinical trials were identified, of which five were published and included in the analysis. Four trials are ongoing or recently completed, while four others have remained unpublished for more than a year since their estimated completion date (Supplementary File ). One randomized trial (NCT05025345, comparator ZCB00) reported outcomes on the registration page, showing a difference of −0.11 logMAR in favor of Eyhance for DCIVA. The delayed publication of these four studies raises concerns about selective outcome reporting and non-reporting. However, the available evidence suggests that these potential biases have had little to no impact on the overall conclusions of the meta-analysis.
Certainty of evidence
Evidence was graded for the primary and secondary meta-analyzed outcomes, with downgrades in the certainty of evidence due to imprecision, risk of bias, and indirectness where applicable (see Table ). For distance visual acuity, Eyhance showed no significant differences compared to monofocal IOLs. However, for monocular vision with the best distance correction, Eyhance improved visual acuity by one line at intermediate and near distances, based on a chart with five letters per row. For binocular vision without distance correction, Eyhance improved visual acuity by one-and-a-half lines at both intermediate and near distances, with high-certainty evidence supporting these findings, except for monocular DCNVA, where moderate-certainty evidence was observed (downgraded due to inconsistency). Monocular and binocular defocus curves showed differences between the comparison groups, though the effect was smaller than that observed for visual acuity at proximal distances. These results were supported by moderate-certainty evidence, downgraded due to risk of bias and inconsistency. Certain endpoints, such as monocular contrast sensitivity function and photic phenomena, showed no significant differences between groups, supported by low-certainty evidence. For spectacle independence at far and near distances, moderate-certainty evidence indicated no differences between groups. However, spectacle independence at intermediate distances showed an increased odds ratio, though this result was based on low-certainty evidence. Finally, no differences were noted for positive dysphotopsia, with moderate-certainty evidence supporting this finding.
This meta-analysis included 31 studies examining the effectiveness of the Eyhance IOL compared to other monofocals across several visual outcomes. The findings consistently showed that Eyhance improved intermediate and near monocular and binocular visual acuities by one to one-and-a-half lines, as measured on a visual acuity chart with five letters per row. Notably, while spectacle independence at an intermediate distance was significantly improved with Eyhance, no difference was observed at a far distance. At a near distance, although the difference reached statistical significance, it was smaller, and the lower range of the confidence interval approached the threshold for no difference, indicating that further studies are necessary to increase certainty regarding near spectacle independence. In contrast, far-distance contrast sensitivity, photic phenomena, and positive dysphotopsia outcomes did not significantly differ between Eyhance and comparator lenses. Subgroup analyses revealed that heterogeneity among studies was partly driven by differences in comparator lenses and testing conditions, with some monofocal IOLs potentially offering outcomes similar to Eyhance (Vivinex Impress, IsoPure, and Zoe Primus-HD), although each was supported by only one non-randomized study graded as having a moderate or high risk of bias according to the ROBINS-I tool.
Our findings align with earlier meta-analyses on the Eyhance IOL, which have demonstrated improved intermediate and near vision without sacrificing distance visual acuity . However, our meta-analysis addresses several methodological limitations present in the previous literature. For example, the risk of bias should be assessed at the outcome level, as it can vary depending on the endpoint assessed, yet prior meta-analyses evaluated bias at the study level . Additionally, the certainty of the evidence has not been evaluated in previous meta-analyses . Considering these methodological limitations and the inclusion of 31 studies compared to the 12 analyzed previously, our work not only confirms earlier evidence but also introduces a more robust methodological approach that has not been adequately emphasized in previous meta-analyses. Additionally, small-study effects were identified in monocular and binocular intermediate and near visual acuity measurements, indicating larger effects in studies with smaller sample sizes.
In addition, the population pool evaluated in this meta-analysis is broader than what has been addressed in previous literature, which likely explains the high heterogeneity found in several endpoints. Although some researchers believe that heterogeneity diminishes the utility of an analysis, this is a misconception . Heterogeneity simply indicates that the true effect size varies across studies. In our analyses, data were extracted on several potential confounders, including IOLs that might be classified similarly to Eyhance as PARTIAL-RoF-Enhanced IOLs , patients with comorbidities, and variations in procedures. This provides a more comprehensive view of the intervention’s effectiveness in these specific situations. Furthermore, the inclusion of comparative case series, which often carry a higher risk of bias, offers evidence that aligns with real-world clinical practice, where both the intervention and comparator are utilized. Several limitations of the current evidence must be acknowledged.
First, the risk of bias was generally higher in non-randomized studies, with issues such as confounding, selection bias, and deviations from intended interventions frequently observed. Failure to use standardized methods for measuring visual outcomes and inadequate reporting of postoperative conditions (e.g., residual refractive errors) in several studies contributed to the heterogeneity in the results and limited our ability to draw firm conclusions for some outcomes, particularly contrast sensitivity and patient-reported outcomes. Additionally, while the meta-analysis incorporated data from 31 studies, the evidence for certain secondary outcomes, such as binocular contrast sensitivity and patient-reported satisfaction, was sparse. Only a few studies addressed these outcomes, and many were at risk of bias. Another limitation arises from the exclusion of unpublished trials or those with incomplete data, which raises the possibility of publication bias, particularly for ongoing trials that have yet to report their outcomes. Finally, the assumption of standard deviations for defocus curves and contrast sensitivity in studies lacking reported data may have introduced imprecision into the pooled estimates, although sensitivity analyses suggest this had minimal impact on the overall conclusions. The results of this meta-analysis suggest that the Eyhance IOL offers a clinically relevant improvement in intermediate and near vision compared to conventional monofocal IOLs. However, moderate evidence suggests comparable spectacle independence at near, which means that the improvement in near visual acuity may not be large enough to reduce spectacle dependence, and therefore the cost-effectiveness of the intervention for near tasks could be questionable. While no significant differences in far-distance contrast sensitivity, photic phenomena, and positive dysphotopsia were found, the certainty of evidence remains low for some of these outcomes, and future research should aim to improve the quality and consistency of reporting. This includes standardizing outcome measurements and ensuring the adequate reporting of postoperative conditions and adverse events . Furthermore, additional studies are needed to investigate the performance of Eyhance IOLs in comparison to other PARTIAL-RoF-Enhanced IOLs, as well as to confirm their effectiveness in specific subpopulations, including patients with ocular comorbidities. Future and ongoing trials should prioritize robust design with appropriate randomization, masking, and reporting, especially considering the high risk of bias identified in several non-randomized studies.
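The small-study effects noted in the discussion are usually probed with a funnel-plot asymmetry test such as Egger's regression. A minimal, illustrative sketch is shown below using hypothetical effect sizes and standard errors; it is not the analysis performed in this review.

```python
import numpy as np
from scipy import stats

# Hypothetical mean differences (logMAR) and standard errors for six studies
md = np.array([-0.14, -0.12, -0.10, -0.07, -0.05, -0.11])
se = np.array([0.06, 0.05, 0.04, 0.03, 0.02, 0.05])

# Egger's test: regress the standardized effect (md/se) on precision (1/se).
# An intercept that deviates from zero suggests funnel-plot asymmetry,
# consistent with small-study effects or publication bias.
res = stats.linregress(1.0 / se, md / se)
t_intercept = res.intercept / res.intercept_stderr   # intercept_stderr needs SciPy >= 1.6
p_intercept = 2 * stats.t.sf(abs(t_intercept), df=len(md) - 2)
print(f"Egger intercept = {res.intercept:.2f} (p = {p_intercept:.3f})")
```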
The findings of this meta-analysis provide high-certainty evidence that the Eyhance IOL significantly improves intermediate and near visual acuity, with patients gaining one to one-and-a-half lines of visual acuity compared to conventional monofocal IOLs, depending on whether testing was monocular or binocular. However, the evidence for other outcomes, such as contrast sensitivity and patient-reported outcomes, remains moderate or low. This underscores the need for adherence to standardized data collection and reporting methods to strengthen the evidence base. Additionally, newer IOLs, such as Vivinex Impress, IsoPure, and Zoe Primus-HD, may offer similar visual outcomes to Eyhance, but few comparative studies exist, many of which are graded as being at high risk of bias. Future research on new PARTIAL-RoF-Enhanced IOLs should aim to confirm these findings and establish their superiority over traditional monofocal IOLs while demonstrating non-inferiority to Eyhance.
What is known
Enhanced monofocal intraocular lenses improve intermediate visual acuity compared to standard monofocals, while maintaining similar distance vision. Further research was needed to clarify the effectiveness of enhanced monofocals at near distances and patient-reported outcomes.
New Information
This paper increases reliability by grading evidence certainty and assessing bias at the outcome level, offering more precise and trustworthy conclusions on enhanced IOL outcomes. There is high-certainty evidence that enhanced IOLs improve intermediate and near visual acuity over conventional monofocal IOLs, with moderate to low certainty for benefits in defocus curves, contrast sensitivity, and patient-reported outcomes such as spectacle independence and photic phenomena.
Supplementary Fig. A: Forest Plot of Subgroup Analysis by Author-Attributed IOL Functional Classification for DCIVA Outcome
Supplementary Fig. B: Forest Plot of Subgroup Analysis by Author-Attributed IOL Functional Classification for DCNVA Outcome
Supplementary Fig. C: Forest Plot of UDVA Outcome
Supplementary Fig. D: Forest Plot of Subgroup Analysis by Author-Attributed IOL Functional Classification for UIVA Outcome
Supplementary Fig. E: Forest Plot of Subgroup Analysis by Author-Attributed IOL Functional Classification for UNVA Outcome
Supplementary Fig. F: Forest Plot of Subgroup Analysis by DC Outcome
Supplementary Fig. G: Forest Plot of Subgroup Analysis by Type for PP Outcome
Supplementary File A: Search strategy
Supplementary File B: Risk of Bias Assessment with Robins-I or Rob2
Supplementary File C: Risk of Bias Assessment Plots
Supplementary File D: Registered studies
Outcomes of deep brain stimulation surgery in the management of dystonia in glutaric aciduria type 1 | a8daac15-9405-4faf-b28a-ff6502fb991c | 11872982 | Surgical Procedures, Operative[mh] | First described in 1975, glutaric aciduria (GA1) is an autosomal-recessive organic acidaemia due to deficiency of glutaryl-CoA dehydrogenase (GCDH) which results in abnormal metabolism of lysine, hydroxylysine and tryptophan . GA1 affects 1 in 100,000 newborns . Children and young people (CAYP) with GA1 are usually asymptomatic until the development of an acute encephalopathic crisis between the ages of 2 and 36 months. Catabolic episodes, often triggered by a febrile illness, give rise to these crises, following which striatal injury results in a complex motor disorder with prominent dystonia . For 10–20% of patients with GA1, in the absence of an acute decompensation, there is an insidious onset of a movement disorder, typically resulting in less severe dystonia. Dystonia in CAYP is intrusive, impairing function, interfering with the delivery of daily care, and causing pain . Pharmacological interventions offer limited efficacy in the management of childhood dystonia and, even when effective, use is often limited by significant side effects . Consequently, there has been major interest in the past 20 years in the application of deep brain stimulation (DBS) for CAYP with dystonia . A recent meta-analysis of 321 children undergoing DBS reported improvement in dystonic symptoms in 86.3% of cases , highlighting the potential benefits of this intervention. Dystonia in CAYP is aetiologically heterogeneous , and data on outcomes following DBS for CAYP with rare causes of dystonia such as GA1 are often limited. We have previously reported the short-term outcomes of three CAYP with GA1 undergoing bilateral pallidal DBS for the management of their medication-refractory movement disorder . Only six other single case reports have been published . To the best of our knowledge, a total of only nine patients with outcome data following DBS in GA1 have been reported to date. The aim of this study is to expand upon published data by presenting a retrospective analysis of CAYP and adults with GA1 undergoing DBS at two institutions, including baseline clinical characteristics, response to DBS and reported complications.
This was a retrospective analysis of imaging and assessments performed as part of standard clinical practice, and thus formal ethical approval was not required under National Health Service (NHS) research governance arrangements. All families gave written consent for imaging and surgical procedures. Patient ascertainment Individuals with a confirmed diagnosis of GA1 undergoing DBS between July 2005 and July 2022 were identified from the institutional database at the Evelina London Children’s Hospital (ELCH), Guy’s and St Thomas’ NHS Foundation Trust, London UK. One additional case, undergoing surgery in adulthood, at the Salford Royal University Hospital (SRUH) NHS Foundation Trust, was also included (Case 16). In all cases, a diagnosis of GA1 had been made on the basis of biochemical testing for elevations in plasma glutaric acid, 3-hydroxyglutaric acid, glutaconic acid, and glutarylcarnitine, also confirmed with GCDH gene analysis. Clinical assessment Demographic and clinical data were extracted for each individual identified, including clinical onset, age at surgery, baseline measures of functional ability (Gross Motor Function Classification System (GMFCS) level equivalent and Manual Ability Classification System (MACS) level equivalent), and baseline dystonia severity as measured by the Burke–Fahn–Marsden Dystonia Rating Scale (BFMDRS). All CAYP at ELCH had medication-refractory generalised dystonia leading to consideration of DBS surgery, suitability for which was assessed by an experienced multi-disciplinary team. Case 16 underwent surgery in adulthood following assessment of suitability for DBS by the multi-disciplinary team at SRUH. FDG-PET-CT image acquisition A total of 14/16 cases underwent resting FDG-PET CT imaging prior to surgery. FDG-PET imaging provides information as to the metabolic activity of brain tissue and has been routinely used as part of the assessment process of CAYP undergoing evaluation as potential candidates for DBS surgery at the ELCH. FDG-PET imaging can help with the qualitative assessment of the target nuclei for DBS insertion, particularly where structural neuroimaging demonstrates areas of brain injury. As previously described, [18F]2-fluoro-2-deoxy-D-glucose (FDG)-PET imaging has been used to help assess eligibility for DBS surgery since 2005 at the ELCH. Prior to October 2013 all CAYP underwent FDG-PET-CT imaging on a GE (General Electric Medical Systems, Waukesha, WI) Discovery ST and a Discovery VCT scanner. Thereafter, scans were conducted using a GE Discovery 710 scanner at the King’s College London and Guy’s and St Thomas’ PET Centre. The FDG dose injected was scaled relative to a 250-MBq dose for a 70-kg adult as (250/70) × child’s weight [kg]. FDG was injected after a 3-h fast and followed by a 30-min uptake period in a quiet room. Brief general anaesthesia was then induced, only for the duration of the 15-min PET-CT image acquisition, to eliminate dystonia-related motion artefact during scanning. General anaesthesia was initiated after the uptake period so FDG uptake reflects brain metabolism during wakefulness. For one case (Case 1 in Table ) a continuous infusion of intravenous 10% dextrose was delivered during FDG-PET acquisition. Structural imaging assessment During the 17 years over which data were reviewed, MRI sequences routinely acquired as part of pre-operative work-up evolved, with acquisition on 1.5 T General Electric or Siemens scanners, or more recently a 3 T Siemens scanner.
In all cases T1-, T2- and proton density-weighted sequences were available for assessment for cases operated at the ELCH. Available pre-operative MRI images were reviewed and assessed for the presence of regional abnormalities (signal change, and/or volume loss) including in the pallidum, putamen, caudate or thalamus by an experienced Consultant Neuroradiologist (author JC). Each of these regions was classified as "normal" or "abnormal", as was the white matter. Pre-operative neurophysiological assessment Central motor conduction times (CMCT) and somatosensory evoked potentials (SEPs) were obtained as part of the pre-operative assessment at ELCH and analysed as previously reported . CMCT and SEP measurements provide information on the integrity of major motor and sensory pathways in the brain. We have previously demonstrated in a cohort of CAYP undergoing assessment for DBS that integrity of these pathways may be demonstrated in CAYP for whom MRI neuroimaging might suggest white matter or other brain injury which would potentially preclude the application of DBS . Furthermore, abnormalities in either CMCT or SEP recordings predict a poorer outcome for CAYP following DBS. CMCT recordings were available for all children, but as SEPs were added to routine patient assessment later than CMCTs, they were not recorded in all cases. CMCTs and SEPs were recorded from all four limbs, with each patient classified as having "abnormal" testing if the recordings from one or more limbs were abnormal . Peri-operative metabolic management Individualised peri-operative metabolic management plans were followed in line with British Inherited Metabolic Disease Group guidance . Surgical procedure For all cases undergoing surgery following assessment at the ELCH, the surgery entailed bilateral implantation of quadripolar DBS electrodes (Medtronic Model 3389) with direct MRI-guided targeting technique under general anaesthesia and with the use of the Leksell frame (pre-2016) or the Neuromate (Renishaw, UK) robot (post-2016). Intraoperative microelectrode recording was performed for most cases until 2016. Lead placement was confirmed by intra-operative CT imaging for cases up to 2019, following which the O-arm intra-operative imaging system was used. DBS programming was initiated on the day of surgery by a movement disorder paediatric neurologist. For Case 16 electrode placement was confirmed with peri-operative MRI. In all cases bilateral DBS was performed, targeting the globus pallidus internus (GPi). All cases operated on at the ELCH received Medtronic Activa RC neurostimulators, except Case 1 (bilateral Medtronic Soletra neurostimulators) and Case 2 (Medtronic Kinetra neurostimulator). Case 16 received a Boston Scientific Vercise neurostimulator. DBS programming DBS programming for the ELCH cohort followed a previously described progression . Pulse generators were initially programmed with a voltage of 0.5 V, pulse width of 450 μs, and stimulation frequency of 130 Hz, with bilateral single monopolar contacts. Based on clinical response, voltages were then increased over subsequent months as indicated and tolerated, before pragmatic activation of second, third and, much less commonly, a fourth monopolar contact. For Case 16 a similar stepwise approach was used, with stimulation initiated 6–8 weeks post-operatively, but with a starting pulse width of 60 microseconds.
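For clarity, the weight-based FDG activity scaling described in the image-acquisition subsection above (250 MBq referenced to a 70-kg adult) can be written as a small helper function; the example weight below is hypothetical.

```python
def fdg_dose_mbq(weight_kg: float,
                 reference_dose_mbq: float = 250.0,
                 reference_weight_kg: float = 70.0) -> float:
    """Scale the injected FDG activity linearly with body weight,
    i.e. (250 / 70) * weight in kg, as described in the methods."""
    return reference_dose_mbq / reference_weight_kg * weight_kg

# A hypothetical 20-kg child would receive approximately 71 MBq
print(f"{fdg_dose_mbq(20):.0f} MBq")
```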
Outcome measures following surgery For CAYP from ELCH, outcome following DBS was assessed using two standardised measures: the percentage change in BFMDRS and the change in Canadian Occupational Performance Measure (COPM) . Both were conducted by highly specialised therapists within the ELCH multi-disciplinary team. BFMDRS motor scores assessed at baseline, 1-year post-surgery and at last available follow-up were collected, with percentage change from baseline BFMDRS score calculated at each time-point. Where available, change in COPM at these time points was also collected. The COPM is a client-centred tool used with CAYP and their carers to identify what daily life problem areas to address with interventions, setting personalised goals. Each "goal" is scored at baseline and following intervention, with a change in score of two points averaged over 5 goal areas being considered clinically relevant . Complications following surgery were identified for each CAYP from a prospectively maintained database . Standardised measures were not recorded for Case 16.
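To make the two outcome calculations explicit, the sketch below computes the percentage change from baseline in the BFMDRS motor score (using the group medians reported in the results purely for illustration) and flags a clinically relevant COPM response, defined as a change of at least two points averaged across the five goal areas; the COPM goal scores shown are hypothetical.

```python
def bfmdrs_percent_change(baseline: float, follow_up: float) -> float:
    """Percentage change from the baseline BFMDRS motor score;
    negative values indicate an improvement in dystonia severity."""
    return (follow_up - baseline) / baseline * 100

def copm_clinically_relevant(baseline_goals, follow_up_goals, threshold: float = 2.0) -> bool:
    """A mean change of >= 2 points across the (typically five) COPM goal
    areas is taken as a clinically relevant response."""
    changes = [f - b for b, f in zip(baseline_goals, follow_up_goals)]
    return sum(changes) / len(changes) >= threshold

# Illustration using the reported median BFMDRS motor scores (105 at baseline, 97.25 at 1 year)
print(round(bfmdrs_percent_change(105, 97.25), 1))                  # -7.4 (% change)
# Hypothetical COPM performance scores for five goal areas
print(copm_clinically_relevant([2, 3, 1, 2, 2], [5, 5, 4, 4, 3]))   # True (mean change 2.2)
```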
Clinical cases From a total of 235 CAYP undergoing primary DBS implantation at the ELCH over the study time period, 15 CAYP with GA1 were identified, representing 6.4% of the implanted population. One additional case operated on in adulthood at the SRUH was also included, i.e. a total of 16 cases. Clinical details for these participants are summarised in Table . Short-term outcomes for cases 1–3 have been previously reported . CAYP ranged in age from 3 to 17.5 years (median 11.25 years) at the time of surgery. Case 16 was 31 years old at the time of surgery. In 3/16 cases no acute episode of encephalopathic decompensation had been noted prior to the development of the movement disorder. Baseline BFMDRS motor score ranged from 58.5–114, median 105, with a GMFCS-equivalence level of V for 10/16 cases, GMFCS IV for 4/15 and GMFCS II for 1/15 cases, demonstrating the severity of movement disorder in these CAYP. Surgery was well tolerated in all cases, with no metabolic complications encountered in the peri-operative period for any CAYP. Imaging findings Structural MRI was available for review in all CAYP operated on at the ELCH, with FDG-PET available in 14/15 CAYP (Table ). In all cases abnormalities of the deep grey nuclei were bilateral in nature (i.e. unilateral structural changes were not observed). Structural abnormalities and reduced or absent FDG PET imaging glucose uptake in the putamen were seen in all 14 CAYP with FDG PET. Structural abnormalities were also identified in the pallidum (14/15 CAYP), the caudate (11/15 CAYP), the thalamus (4/15 CAYP), and in the white matter in 12/15 CAYP. Abnormal FDG uptake in the caudate was seen in 4/14 CAYP (all of whom also demonstrated structural MRI lesions within the caudate), and abnormalities of FDG uptake in the thalamus were seen in 2 CAYP (1 of whom did not demonstrate structural changes in either thalamus). Examples of progression in MRI findings and FDG-PET images for a single case are illustrated by Fig. a. A comparison of FDG-PET and MRI findings for Cases 6–15 is shown in Fig. . Neurophysiological findings Despite white matter MRI changes in all ELCH cases, CMCT measurements were abnormal in only 2/15 cases (Table ). Notably, CMCT measurements were normal in 11/12 CAYP with white matter changes and abnormal CMCT measurements were obtained in 1 CAYP without evidence of white matter changes on MR imaging. Abnormal SSEPs were found in only 1/11 CAYP (who exhibited no abnormalities of the thalami on FDG-PET or MRI). SSEP measurements were normal in the 4 CAYP with abnormalities of the thalami on FDG-PET and/or MRI. Neurophysiological measurements were not available for Case 16. Outcomes following surgery No outcome data were available for one child who moved away from the UK before 1 year post-operatively or for one child who experienced infection of the implant at 9 months post-surgery, resulting in complete removal of the implanted system. For the remaining 13 CAYP from ELCH, outcome data available ranged from 1 to 5 years post-surgery. Four CAYP had transitioned to adult services since DBS surgery. Follow-up data were available for two of these cases (Case 4 with 9 years of follow-up data at SRUH before death at the age of 30 years, and Case 5 with 3 years of follow-up data at King’s College Hospital NHS Foundation Trust, London). Death occurred prior to transition in 2 CAYP at 1.5 and 5.5 years following surgery, respectively (in neither case related to DBS). The remaining 8 CAYP continue under active follow-up at the ELCH.
BFMDRS and COPM scores were available in 12 and 11 CAYP, respectively, and are shown in Table and Fig. . BFMDRS motor score 1-year post-surgery ranged from 57.5–108.5 (median 97.25) and at last follow-up ranged from 57.5–112 (median 104). There was no statistically significant change compared to baseline at either time point, P > 0.05. In contrast, COPM scores demonstrated a clinically significant improvement in 7/11 CAYP at 1 year, and a clinically significant improvement in 8/11 CAYP at last follow-up. Standardised outcome measures were not available for Case 16 but, whilst limited improvement in oromandibular dystonia was observed, a greater than 50% reduction in truncal and limb dystonia was estimated from clinical assessments, with significant improvement in mobility. Prior to surgery Case 16 had required multiple hospital admissions for dystonic crises, but no admissions have been required in the 9 years since DBS insertion. A total of six complications requiring re-operation occurred in 4 CAYP during management at the ELCH (outlined in Table ). Following transition to adult services, Case 4 required surgery to remove an infected Activa RC implant 10 years following the original surgery. Case 5 required a routine battery change 1 year following transition, complicated by infection needing repositioning of the implant after one month, and then repositioning of a "flipped" battery 1 year later. Notably, Case 16 is now 9 years following surgery in adulthood, with no revision surgeries required to date. Relationship between neuroimaging findings and outcome No clear relationship between neuroimaging findings and outcome was identified. The single case with thalamic abnormalities solely on FDG-PET was amongst the 3/11 CAYP who did not show a significant improvement in COPM scores. All three of these CAYP demonstrated structural changes in all basal ganglia structures and white matter, with one child also demonstrating structural changes in the thalami.
This report presents multi-modal clinical, imaging and neurophysiological data from a cohort of 16 individuals with GA1, together with clinical outcomes from pallidal DBS delivered to manage their refractory dystonia. To our knowledge this is the largest GA1 functional neurosurgery case series of its kind. Our key findings are (i) despite small and statistically non-significant changes in BFMDRS motor score, significant functional improvement, as measured by the COPM, was observed in 8/11 (> 70%) CAYP for whom this measure was collected, and (ii) abnormalities of basal ganglia, thalami and white matter on structural MRI or FDG-PET imaging did not preclude improvement in COPM score. Prior to this report, outcomes following DBS had been reported for only 9 CAYP with GA1, including 3 from our centre . Consistent with previous findings, only small changes in BFMDRS score were seen in our expanded cohort, with changes compared to baseline at the group level not reaching statistical significance. The BFMDRS was originally developed and validated in adults with idiopathic or genetic forms of isolated dystonia . Significant limitations have been identified in the application of the BFMDRS to CAYP with acquired forms of dystonia , and we have previously demonstrated that CAYP may experience functional benefits following DBS surgery which are not captured by changes in BFMDRS score . This is consistent with the observation of improvement in COPM in 8/11 CAYP despite minimal BFMDRS change in our cohort. Furthermore, Case 16 has experienced a sustained period free from dystonic crises since surgery. We have previously reported the application of the COPM to demonstrate improvements in individualised functional goal areas for a cohort of 30 CAYP with dystonia undergoing DBS . The COPM is an evidence-based tool designed to capture self-perception of performance in everyday living over time. The reproducibility and validity of COPM have been demonstrated in a large cohort of CAYP (median age 3.7 years) . Table provides a summary of DBS outcomes in the 6 previously reported cases not included in our current case series. Of note, a substantial reduction in both Barry-Albright Dystonia Scale (BADS) and BFMDRS score (~ 50%) was reported for one child receiving bilateral globus pallidus internus DBS in combination with stimulation of the pedunculopontine nucleus , with similar reduction in one further child receiving bilateral pallidal stimulation . Limited data are available to support the use of other neurosurgical interventions in the management of dystonia in CAYP with GA1 . Intrathecal Baclofen was reported to be of benefit in a 15-year-old girl with GA1, with a reduction in BADS score from 12 to 9. Her movement disorder was described as fixed and mobile dystonia and spastic quadriplegic cerebral palsy without parkinsonism . Positive outcomes with the use of intraventricular baclofen have also been described in 2 patients with GA1 (aged 10 and 23 years old, with reduction in BADS score from 30.7 to 5.0 and from 29.7 to 24.3, respectively) , both of whom were described as exhibiting generalised dystonia. Finally, outcomes following bilateral pallidotomy have been reported for 3 cases , with reductions in BFMDRS score from 113 to 99 reported in a 12-year-old , and from 115 to 98 in a 6-year-old . Qualitative improvement in dystonia was also reported for a severely affected 18-month-old .
More formal standardised functional or non-impairment-based scores have not been reported for previous cases outside our current series, consistent with our previous finding that outcome measures of interventions in childhood dystonia do not typically focus on the priorities of CAYP or their families . We have previously reported normal CMCT in 50/62 children with dystonia, despite abnormal MRI in 40/62 of that cohort . That previous cohort included four of our currently presented CAYP with GA1. In an expanded cohort of 180 CAYP with dystonia, abnormalities in SSEP measurements were more commonly identified than changes in CMCT (47% versus 19%), with better outcomes seen following DBS when both measurements were normal . Neurophysiological testing performed as part of routine pre-operative assessment for the 15 CAYP with GA1 in this cohort again identified abnormal values in only a handful of cases, despite MRI abnormalities, demonstrating the importance of a multi-modal assessment of motor and sensory pathway integrity. Structural MR imaging in this cohort demonstrated injury of the putamen in all cases, with very common involvement also of the pallidum and caudate, and less frequent abnormalities of thalamus and white matter (Table ). It should be noted that the patients in our cohort are a selected group within the GA1 population, all having severe, medically refractory dystonia. In this context, it is pertinent that analysis of a large cohort of MRI scans from 180 individuals with GA1 found putaminal changes to be the most reliable predictor of movement disorder . In a large pattern-recognition approach to basal ganglia abnormalities in an international cohort of 305 MRI scans, GA1 was assigned on the basis of cluster analysis to a cluster of disorders with predominant T2-weighted hyperintensities in the striatum . This cluster also included other metabolic disorders (e.g. propionic acidaemia). Interestingly, pallidal abnormalities were a relatively infrequent finding in the cases grouped into this cluster , in contrast to the relatively high frequency with which they were seen in our cohort. Again, this could be explained in part by the selective nature of our cohort. Importantly, changes in basal ganglia and/or thalami did not preclude the potential to respond to DBS in the CAYP in our current series. In a previous statistical analysis we have demonstrated an FDG-PET pattern of relative regional hypometabolism in the posterior putamen and pallidum as characteristic of GA1 . Our qualitative analysis in the current cohort is consistent with this. As seen with structural imaging changes, hypometabolism of the basal ganglia and thalami did not preclude the possibility of a positive response to DBS. Newborn screening for GA1 is available in most of the more economically developed countries of the world. The timing of screening varies, but it is exceptionally rare for a child to have suffered striatal injury prior to screening. Screening aims to reduce the occurrence of injury from brain accumulation of glutarate and 3-hydroxyglutarate by prompt treatment of illness, prescription of L-carnitine, and a lysine-restricted, arginine-supplemented diet . Despite the very clearly demonstrated benefits of this approach there remain a number of children who will develop striatal injury despite having received specialist metabolic management since newborn screening. Two biochemically distinct but clinically similar entities are recognised based on excretion of disease metabolites.
Low excretors are a source of false-negatives at newborn screening but have the same risk of acute encephalopathic crisis with striatal injury . In addition, adherence to all aspects of this treatment strategy is challenging for families. In a review of patients screened as newborns, 32% of children did not receive treatment according to published guidelines, and overall, 30% of patients developed major motor symptoms. Despite a clear treatment effect, 7% of fully adherent patients also had serious striatal injury . In certain populations where full adherence is more challenging, 90% of children experience acute encephalopathic crises despite newborn screening and implementation of full guidelines. Striatal injury can also occur insidiously, without an apparent crisis. It has been suggested that this may be more common in children who are not fully adherent with the recommended diet . It seems likely, therefore, that even with current preventative strategies the need for intervention to manage distressing dystonia will remain. Several limitations to our present study must be acknowledged. First, this is a retrospective review, at risk of the attendant limitations of such study designs. Whilst representing a comparatively large cohort of CAYP, given the rarity of GA1 and limited access to DBS, the small number of cases precludes complex statistical analysis. SEP data were not available for all cases, nor were COPM scores. Post-operative follow-up was of limited duration, with 2/15 CAYP transitioned to adult services for whom no further data were available and 1 further CAYP having been lost to follow-up on leaving the UK. The limitations of the application of BFMDRS score in this population have been acknowledged above, further compounded by scoring performed in a clinical setting, not blinded to operative status.
In this retrospective cohort, a high proportion of CAYP with GA1 undergoing pallidal DBS for refractory dystonia achieved clinically significant functional improvements as measured by the COPM, despite failure of the BFMDRS scores to demonstrate an improvement in objective dystonia measurements. This may reflect the poor sensitivity of the BFMDRS to demonstrate change in this patient group. Importantly, functional improvement was achieved in the majority of patients despite imaging evidence of damage to the basal ganglia and/or white matter. Finally, the surgical complication rate was no different from previously published data from children undergoing DBS with a broader range of aetiologies. DBS may be considered as a management option for children with GA1 who have appropriately selected goals for intervention.
A Paper-Based Simulation Model for Teaching Inguinal Hernia Anatomy | 17f6696e-5214-413c-8897-c038ba204541 | 10132405 | Anatomy[mh] | Hernias of the abdominal wall, defined as the abnormal protrusion of intra-abdominal contents through the containing abdominal wall, are a common surgical pathology . They have a prevalence of about 4% in those over 45 years old . Inguinal hernias represent 75% of all abdominal wall hernias, and their repair remains one of the most common general surgical operations in the UK . However, the complex anatomy of the inguinal canal continues to make the understanding of this disease and its surgical repair challenging for medical students and junior surgical trainees . Traditionally, in the undergraduate curriculum, this topic is delivered using didactic lectures and tutorials or taught in the operating theatre . These modes have inherent limitations; lectures are inherently descriptive and use 2-dimensional images, whereas intraoperative teaching is opportunistic and unstructured. The COVID-19 pandemic and its subsequent reprioritisation of healthcare resources have consequently led to a detrimental effect on the volume and quality of teaching opportunities in surgical training . This pandemic has highlighted the importance of the development of surgical training tools which can be complementary to traditional surgical training techniques or be used as effective contingency alternatives where normal workplace surgical training is reduced or suspended. This has led to the development and use of a 3D paper-based model for simulated teaching of inguinal hernia in our department.
Hernia model A paper-based model was developed comprising four overlapping paper panels simulating the anatomical layers of the inguinal canal and associated structures (Figs. and ). These paper panels display key anatomical structures of the inguinal canal in schematic fashion and allow for low-fidelity simulation of open groin hernia procedures (Figs. and ). These models can be easily modified using readily available adjunct materials such as surgical gauze, plastic tubing and glove material to simulate normal inguinal canal anatomy, various inguinal hernia pathologies and an open surgical mesh repair of an inguinal hernia (Figs. , and ). Learning sessions The use of these models was incorporated into a timetabled structured learning session delivered by the authors for 3rd- and 4th-year medical students rotating through their general surgical placement in a single teaching hospital site. Briefly, in these learning sessions, pertinent concepts surrounding the anatomy and pathology of inguinal hernia are discussed including surface and surgical anatomy, clinical examination, investigations including radiology, different pathological variants and surgical techniques involved in the repair of inguinal hernia. These learning sessions were designed and blueprinted based on Gagne’s instructional levels (Supplementary Table 1). Students are then provided with one each of a variety of completed models of the hernia, each constructed to simulate the normal inguinal canal, various inguinal hernia pathologies and a surgically repaired inguinal hernia (Figs. and ). Students are then allowed to make a ‘skin incision’ on the model and dissect down, simulating a surgical exposure of the inguinal canal in the paper model to the deepest layer and discuss what they find on these models and compare it with the other models (Fig. ). In models with simulated pathology, students can proceed to a repair of the hernia, including dissection of the sac and reducing it, and placing and securing the ‘mesh’. Students’ perceptions of their knowledge and understanding of inguinal hernia anatomy and pathology were assessed using anonymised surveys delivered immediately before and repeated immediately after the learning sessions. Additionally, students’ perceptions of the usefulness of the models and the sessions were assessed in the post-session questionnaires (Fig. ). These questionnaires incorporated three questions asking the learners to rate their confidence in describing the layers of the inguinal canal, identifying a direct and indirect inguinal hernia and in naming the contents of the inguinal canal on a 10-point semantic differential scale. Learners were also asked to rate the usefulness of the session and provide freehand comments. Ethics Proportional review has been sought from the University of Glasgow College of Medical, Veterinary and Life Sciences Ethics Committee who have advised that this research project does not need full ethical review and has waived the need for this. Data used and reported in this study are from routinely collected course evaluation data and do not include any personal identifiable details from students involved in these teaching sessions.
A total of 45 students participated in these sessions over a period of 6 months. Pre-learning session mean ratings for the learners’ confidence in their understanding of the layers of the inguinal canal, identifying indirect and direct inguinal hernias and in naming the contents of the inguinal canal were 2.5, 3.3 and 2.9, while post-learning session mean ratings were 8.0, 9.4 and 8.2, respectively. Paired samples Student’s t-tests for all three questions were statistically significant (p < 0.001) (Fig. ). The mean rating for usefulness of the session was 9.6/10. Free comments from students emphasised the usefulness of the models as a visual learning aid (Fig. ).
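The pre/post comparison reported above is a standard paired analysis. A minimal sketch using hypothetical 10-point confidence ratings and SciPy's paired t-test is shown below; it is illustrative only and not the authors' analysis code.

```python
import numpy as np
from scipy import stats

# Hypothetical pre- and post-session confidence ratings on the 10-point scale
pre = np.array([2, 3, 2, 4, 3, 2, 3, 2, 3, 2])
post = np.array([8, 9, 7, 9, 8, 8, 9, 7, 8, 8])

t_stat, p_value = stats.ttest_rel(post, pre)   # paired-samples Student's t-test
print(f"mean change = {np.mean(post - pre):.1f}, t = {t_stat:.2f}, p = {p_value:.4f}")
```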
Our results indicate that students found these sessions useful in improving their understanding of inguinal hernia anatomy, pathology and surgical repair. Simulation is increasingly used in surgical training and represents a shift from the traditional ‘see one, do one, teach one’ paradigm of surgical training in the past . There have been multiple drivers for this paradigm shift including increasingly steep learning curves associated with modern surgical techniques, an increased focus on patient safety and the adoption of modern educational pedagogical methods in surgical training . Simulation is pedagogically consistent with current understanding of surgical skill acquisition and development. Fitts and Posner describe the three-stage theory of skills acquisition as incorporating the three distinct stages of cognition, integration and automation, which respectively involve intellectualising the task, using this understanding to translate it into execution of the task, and thereafter developing automation of the task through continued practice . Simulation allows trainees to develop and master the earliest stages of task acquisition in a safe environment away from the patient. The evidence for simulated models of hernia repair and their efficacy is scarce in the literature. Ansaloni et al. and Nazari et al. have both independently described different 3-dimensional models, constructed primarily from a cardboard box and from different fabrics, respectively . Mann et al. describe a full paper model similar to ours illustrated with realistic anatomy . However, unlike our model, Mann et al.’s model does not allow modification for the simulation of different surgical pathologies using adjunct material . Other ex vivo models, including computer simulation models, have been described but are more cost-intensive and often not available as open-source models which can be reproduced widely by readers and interested trainers . Our model is also of considerably low fidelity, given its use of paper and schematic anatomy; fidelity, in the context of simulation, being the (multidimensional) level of realism of a particular simulation activity as experienced by the learners . This design is deliberate. Indeed, current evidence suggests that educational outcomes are similar in high- and low-fidelity models and some studies suggest low-fidelity models are superior to high-fidelity models . A more unified interpretation of current evidence may be that training should incorporate a range of fidelity levels, and this can be personalised to the individual needs of the learners. When considering this within the context of cognitive load theory, low-fidelity, simpler models may be associated with minimising the intrinsic and extraneous cognitive loads (intrinsic load being the innate difficulty of the task itself, in this case the hernia repair, and extraneous load being any other external loads not related to the subject matter itself, e.g. the learning session and how it is designed) . This can therefore better aid understanding of the key concepts behind the task and, in turn, acquisition of learning . This suggests that these low-fidelity models are ideally suited to introducing the concept of hernias and hernia repairs to relative novices such as medical students and surgical trainees at the beginning of training.
Importantly, it is likely that the best learning programs will employ a mixed and perhaps stepwise approach of increasing fidelity and complexity; therefore, our low-fidelity model may be utilised as an introductory-level model to introduce the concept to novices before progressing to more complex simulations, for example, computer, 3-dimensional, cadaveric and finally patients undergoing hernia repairs in the operating theatre . This work has some limitations. While we have assessed medical students’ perceptions of their knowledge and understanding, we have not objectively assessed their knowledge and understanding levels. Future research should assess knowledge and understanding levels of students before and after undergoing learning sessions using these models. These models and their efficacy also need to be validated across medical students at different training levels, as well as among postgraduate doctors in the early stages of surgical training. In conclusion, we describe a cost-effective paper-based model for the teaching of inguinal hernia which can be flexibly modified to represent normal anatomy and different surgical pathologies of the inguinal canal. The use of these models within a structured learning session has been associated with improved students’ perceptions of their knowledge and understanding of the anatomy, pathology, and management of inguinal hernias. This paper also provides the model as an electronic template (Supplementary File 2), along with detailed information on the design and construction of both the model and the associated lesson plans, making this an open-source model which can be evaluated and used by surgical trainers on a global basis.
Below is the link to the electronic supplementary material. Supplementary file 1 (DOCX 16 kb); Supplementary file 2 (PDF 147 kb).
‘It surprised me a lot that there is a link’: a qualitative study of the acceptability of periodontal treatment for individuals at risk of rheumatoid arthritis

The impact of poor oral health may not be well understood by individuals at risk of RA and the health professionals involved in their care. Seeking dental treatment can be hindered by dental anxiety, costs and inequalities around access to dentists.
A clinical trial involving preventive periodontal treatment is potentially acceptable for individuals at risk of RA.
The initiation of rheumatoid arthritis (RA) is purported to occur at mucosal sites, including the oral cavity, lung and gastrointestinal tract. Here, local inflammation may occur due to a combination of genetic and environmental risk factors. In addition, a bacterial dysbiosis may exist; it is postulated that the combination of inflammation and dysbiosis may trigger a break in immune tolerance, in particular towards citrullinated proteins. Periodontal disease (PD) is a chronic inflammatory disease that destroys the tooth-supporting tissues, including the alveolar bone, periodontal ligament and fibres, and the overlying gingiva. Globally, it affects 20%–50% of people, with approximately 10% suffering from its severe form. There is mounting evidence associating PD with RA. Both share common genetic and environmental risk factors, including smoking, obesity and socioeconomic status. The prevalence of PD is higher in patients with RA compared with the general population. PD is also increased in individuals at risk of RA, before the onset of clinical arthritis, suggesting that periodontal inflammation precedes joint inflammation. One bacterium of interest is Porphyromonas gingivalis, which is enriched in PD and produces a peptidyl arginine deiminase enzyme; this citrullinates C-terminal arginine residues in α-enolase and fibrinogen, two peptide targets implicated in RA. PD can be effectively treated through surgical and non-surgical interventions. Interestingly, a recent trial reported an improvement in RA disease activity following the treatment of coexistent PD. Early intervention to improve periodontal health in people with early RA or at-risk individuals may, therefore, provide a unique opportunity to delay the progression of RA or potentially prevent its onset. Many individuals at risk of RA have expressed reluctance to take preventive medications, especially when asymptomatic, while making lifestyle changes is perceived to be more acceptable. Periodontal treatment and advice could be considered a non-invasive, low-risk intervention that may provide similar systemic benefits to drug therapy, but without the risk of drug side effects, and has the additional benefit of treating a coexisting disease with its own complications such as pain and tooth loss. Lifestyle change interventions have been successfully demonstrated in patients with type II diabetes, reversing disease onset. A previous qualitative study explored the experiences and priorities concerning oral health, and barriers and facilitators for periodontal trial participation, among patients with established RA. However, to our knowledge, no previous studies have explored perceptions of oral health among individuals at risk of developing RA, nor of the healthcare professionals involved in their care. As successful periodontal treatment is dependent on both adequate service provision and patient adherence, it is necessary to explore the barriers and facilitators for accessing periodontal care and maintenance among these groups. This study aimed to explore, among at-risk individuals and relevant healthcare professionals, the acceptability of periodontal treatment as a measure to potentially prevent RA.
This was a qualitative interview study employing a phenomenological approach to explore the meaning behind participants’ lived experiences. Our study is reported in line with the Consolidated Criteria for Reporting Qualitative Studies framework ( ).

Supplementary data: 10.1136/rmdopen-2023-003099.supp1

At-risk participants
A purposeful sample of CCP+ at-risk individuals, with musculoskeletal (MSK) symptoms but no synovitis, was recruited from the Leeds CCP research cohort. Briefly, this is a national research cohort which recruits individuals presenting with new non-specific MSK symptoms but no clinical synovitis. Those who test positive for anti-CCP antibodies are at risk of developing RA and are followed in the Leeds CCP research clinic. At-risk participants aged 18 and above who were able to give informed consent, and able to speak and understand English, were eligible to participate. Some at-risk individuals who were invited to participate in our qualitative study had already undergone periodontal assessment delivered by a dentist, and had commenced or declined periodontal treatment as part of a separate CCP dental study (IRAS ID 213744). Participants were approached by telephone.

Healthcare professional participants
A wide range of healthcare professionals working in the planning and delivery of both medical and dental care services, including clinicians, commissioners and policy-makers, were invited to take part in this study through purposive sampling using the authors’ professional networks. Some clinicians were involved in providing direct care to CCP+ at-risk individuals, whereas others were National Health Service (NHS) rheumatologists/nurses independent of the cohort/research team and would not be expected to have any specific knowledge of this area. Other healthcare professional participants had an indirect role through their senior leadership position in providing commissioning advice, training of the health workforce, etc. Others did not have a clinical background, but had a role in policy-making and commissioning. Participants were approached by email.

Data collection
Individual semistructured interviews were conducted via video or telephone between February 2021 and August 2022, using topic guides ( ). Questions were open-ended and structured around the research aim. Each participant completed a single interview and provided written informed consent prior to their interview. Two female members of the research team (KV-C, a psychologist and senior qualitative researcher, and HS, a clinical academic podiatrist with experience in pre-RA research; both PhD) conducted the interviews with at-risk participants; both were previously unknown to the participants. The researchers conducted the first two interviews together to ensure consensus in the approach to questioning; the remaining interviews were undertaken by one of the two researchers. LSC observed two of the interviews. Healthcare professional participants were interviewed by one male member of the research team (SS, a specialist registrar in dental public health with experience in dental and RA research, PhD), who was known to nine of the 11 participants. All participants were briefed on the purpose of the study and the interviewing researcher’s background and personal motivation, and were given the opportunity to ask questions prior to the interview. All interviews were digitally recorded, transcribed verbatim and supplemented with field notes. The interview duration ranged from 23 to 45 min.
While we did not aim for data saturation, which is arguably inappropriate for reflexive thematic analysis, we held ongoing discussions relating to recruitment to ensure our research aim was fully addressed and our final sample size was based on achieving adequate diversity of the sample and depth of data generated from participants.

Patient and public involvement
Patient and public involvement (PPI) contributors from local dental and rheumatology PPI groups were involved in shaping the research question, and developing the interview topic guide and participant information sheet (PIS) for at-risk participants. The wording in the topic guide and PIS changed as a result of involving PPI contributors, who suggested providing further information to participants about why the study was being conducted and what the interview data would inform. PPI contributors also informed our approach to data collection; we originally intended to conduct the interviews exclusively by telephone, but it was suggested that holding a telephone for an extended period may be difficult for participants with joint symptoms. As a result of PPI, we offered all participants the choice between a video or telephone interview.

Analysis
Data from interviews with at-risk participants were analysed using reflexive thematic analysis. Interviews were uploaded into NVivo V.12 (QSR International; 2018) and initially coded by one researcher (LSC), who read and reread the transcripts, generated initial codes and collated similar codes. Coding was inductive, with 10% of the transcripts second coded (KV-C), and regular coding discussions held with all other team members. Discrepancies were settled by group consensus. Codes were grouped into provisional themes through a team discussion, then reviewed against the whole data set by one other researcher (ZM). Data from interviews with healthcare professionals were then independently analysed by two researchers (LSC and ZM); coding was deductive, based on the preidentified set of constructs identified from the at-risk participant data. The two researchers discussed any discrepancies in coding until consensus was reached. The entire research team then reviewed and refined the healthcare professional content of each prespecified theme through group discussion.
Twenty-two individuals at risk of developing RA were approached about the study, and 19 participated. One declined participation due to ill health, while two were withdrawn from the study as they developed inflammatory arthritis prior to being interviewed. Eleven healthcare professionals were also approached about the study, all of whom participated. At-risk participant characteristics and healthcare professional characteristics are presented in , respectively. At-risk participant data were obtained from interview transcripts where possible; medical records were accessed for missing data as required. Three themes (six subthemes) were identified. A conceptual map identifying links between themes is displayed in and an example of the coding tree is provided in . Quotations supporting each theme are presented in ; quotes from at-risk participants are coded with the prefix PQ, while quotes from healthcare professionals are coded with the prefix HPQ.
Knowledge of shared at-risk factors

Individuals at risk of RA
Participants identified various perceived risk factors for RA, including genetics, diet, being overweight, lack of exercise and smoking. However, the majority of participants were unaware of any potential link between poor oral health and RA/the risk of developing RA prior to being invited to participate in a dental research study (PQ1). Some participants recognised the negative effects of smoking on oral health (PQ2, PQ3), and half of participants were aware that smoking would increase their risk of developing RA. Among at-risk participants who were unaware of the link between smoking and the risk of developing RA, the information was unsurprising (PQ4, PQ5). In contrast, one participant commented on the lack of public awareness of the link between poor oral health and RA, including the risk of developing RA (PQ6). Some participants also perceived a potential lack of knowledge among dentists regarding their at-risk status and the association between poor oral health and RA/risk of developing RA (PQ7, PQ8, PQ9), and highlighted that from a dental perspective, medical history was focused around any medications they were taking rather than specific conditions (PQ10).

Healthcare professionals
Healthcare professionals highlighted a disjoin between dentistry and medicine, with some from a medical background acknowledging that it did not occur to them to send their patients to a dentist or ask about oral health, and that communication between medical and dental professionals was rare unless there was a specific problem. Inadequate holistic management of patients was identified from both a dental and rheumatology perspective (HPQ1, HPQ2), but the potential to overcome this was also recognised (HPQ3). Healthcare professionals from a dental background emphasised the difficulties of not having access to complete medical histories for patients, from a safety perspective. They felt that some patients incorrectly assumed that their healthcare records were automatically shared between healthcare professionals, whereas other patients did not recognise the importance of sharing this information and were surprised by the link between oral health and general health (HPQ4). Some healthcare professionals perceived that the disjoin between medicine and dentistry was a result of commissioning and financial barriers and inadequate training (HPQ5, HPQ6). Variations in the extent of collaboration between medicine and dentistry, due to geographical location and research activity, were also acknowledged (HPQ7, HPQ8).

Information and communication

Individuals at risk of RA
Preference for provision of information relating to the association between oral health and RA, and to dental trial participation, varied among participants. One participant perceived that time point preferences would depend on the individual (PQ11). Some participants expressed a preference for verbal information, others preferred written information, and others felt they needed a combination of both (PQ12). Likewise, some participants felt information of this nature was best delivered by a dentist, while others felt it should come from a rheumatologist. One participant suggested a multidisciplinary approach whereby dentists and rheumatologists provide the same information, while others recognised the issues that lack of communication between dental and medical teams posed. A participant with dental phobia expressed a preference for visual aids prior to treatment.
Other participants highlighted the importance of feedback and encouragement regarding the impact of dental treatment on their risk of developing RA as a motivator to continue with preventive measures (PQ13).

Healthcare professionals
One healthcare professional perceived that information relating to the link between oral health and the risk of developing RA should come from the rheumatology team (HPQ9). With regard to the timing of information provision, another felt the link between poor oral health and systemic conditions should be discussed at diagnosis (HPQ10). The use of guidelines and posters was suggested to aid rheumatology teams to ask patients about their oral health.
Personal challenges and opportunities for dental intervention and oral health maintenance

Individuals at risk of RA
The majority of participants made routine visits to a dentist; however, negative perceptions of these visits were common. Participants described ‘hate’ towards the experience of visiting the dentist, ‘a little bit of fear at the general thought of it’ and ‘tensing up’. Some participants explicitly expressed a phobia of dentists and attributed their anxiety to traumatic dental experiences during childhood (PQ14). A minority of participants were comfortable visiting the dentist, with no anxiety whatsoever, but still perceived that ‘people don’t like dentists’. Comorbidities also impacted on the perceptions and priorities around oral health for some participants. One participant noted that her reflux had caused multiple fillings. Another acknowledged that his leaking heart valve meant he was supposed to look after his teeth; this participant had explicitly made his dentist aware of his heart condition, but was unsure if the dentist knew he was CCP+ at risk (PQ15). Oral health was identified as less of a priority when compared with conditions such as irritable bowel syndrome, which had a greater impact on daily life (PQ16). Another participant expressed how stress and anxiety made oral health less of a priority (PQ17).

Healthcare professionals
Healthcare professionals also identified dental anxiety as an issue for patients, perceiving that some patients avoided going to the dentist despite needing treatment (HPQ11, HPQ12).

External barriers to dental intervention and oral health maintenance

Individuals at risk of RA
Many participants expressed difficulty in accessing an NHS dentist, particularly since the COVID-19 pandemic. Some participants had gone to a private dentist as a result (PQ18). One participant who did have an NHS dentist felt ‘lucky’, but emphasised the short duration of NHS dental appointments, while another expressed how her NHS dentist had not explained she had gum disease or given any advice on how to address it. This was attributed to lack of time during NHS dental appointments (PQ19, PQ20). While a minority of participants confirmed that the cost of dental treatment was not a problem for them, many participants identified that the cost of dental treatment had previously been or was still an issue. Some noted that although cost was an issue, it had not stopped them going to the dentist, whereas others explicitly stated that the cost had an impact on how frequently they were able to go. Cost also impacted on oral health maintenance; for example, one participant could not afford the upkeep of his dentures after volunteering for a limited course of free treatment at a dental school. Another participant perceived that dentistry was about making money rather than about health, while another felt that dentists had not given enough advice regarding the link between oral health and general health as a motivator to maintain good oral health habits (PQ21).

Healthcare professionals
Healthcare professionals identified similar barriers to dental intervention, highlighting that patients, both their own and in the wider sense, had difficulties accessing NHS dentists (HPQ13). Some attributed these access difficulties to social deprivation (HPQ14, HPQ15). This perceived lack of access to a dentist had a potential impact on healthcare professionals’ management of patients (HPQ16, HPQ17). Some healthcare professionals also identified cost as a barrier to seeking dental treatment.
This included cost of both NHS treatment, and private treatment when NHS access was not possible (HPQ18, HPQ19).
Making oral health changes with the aim of preventing RA

Individuals at risk of RA
Participants discussed oral health issues such as bleeding and sore gums, chipped and weak teeth, infections, missing teeth and self-extraction. In some cases, oral health issues were closely linked to the external barriers identified in Theme 2 (PQ22). A minority of participants stated they had no problems with their oral health. Participants described varying levels of oral health maintenance, including regular brushing, flossing, use of interdental brushes, mouthwash, and electric toothbrushes, avoidance or reduction of carbonated drinks and sugary snacks, and drinking through a straw. Although half of participants described experiencing symptoms in their hands, and discussed how joint pain had led to limitation or modification of activities, being unable to work, and relying on family members for personal care, only two reported that these symptoms had caused difficulties with oral health maintenance. Among participants who were previously aware of the link between oral health and developing RA, some had actively made changes; for example, visiting the dental hygienist more often, having a better brushing routine and quitting or reducing smoking (PQ23, PQ24). One participant stated that being at risk of developing RA resulted in being willing to pay for dental treatment. Another reported that she would only seek dental treatment due to being at risk of RA if a dentist recommended it (PQ25), whereas being told about the link between developing RA and oral health during the interview was enough for another participant to state that she would prioritise her oral health (PQ26).

Healthcare professionals
Some healthcare professionals identified that the importance of good oral health behaviours might be underestimated, by people in general and by rheumatology patients specifically (HPQ20, HPQ21). They concluded that patients who had previously neglected their oral health would have difficulties changing their behaviour (HPQ22).

Acceptability of participation in periodontal research aiming to prevent RA

Individuals at risk of RA
Seventeen of the 19 participants reported that a clinical trial aiming to reduce the risk of RA through dental treatment would be acceptable to them (PQ27). In contrast, a clinical trial aiming to reduce the risk of RA through taking a medicine was less acceptable; participants identified the need to consider their risk level and side effects of the drug. Facilitators to participating in a dental trial included the personal benefits of being able to reduce their risk of developing RA (PQ28) and access free dental treatment (PQ29), and the wider societal benefit of being able to potentially help others in the future (PQ30). A participant with dental phobia felt that the acceptability of this type of trial was dependent on the clinician carrying out the treatment, and that pain or discomfort during treatment would influence his decision to participate. Other participants recognised the importance of seeing the same dentist at every visit and felt they would be more comfortable receiving treatment from their routine dentist rather than a new dentist (PQ31). In contrast, another participant felt that treatment as part of a research study should be done by a specialist rather than at a routine dentist appointment. Other potential barriers to dental trial participation included the location of treatment and appointment times.
Some participants suggested that a smaller, less clinical environment would be better for people with dental phobia, while others focused on ease of parking nearby and public transport routes. Participants highlighted other commitments that could affect their ability to participate, such as childcare and work (PQ32). One participant perceived that she had no oral health problems, so participation in the trial would not be a priority for her.

Healthcare professionals
While healthcare professionals understood the advantages of a multidisciplinary approach to managing patients with systemic diseases, some identified the cost of dentists’ time as a potential barrier. Others perceived that this could be overcome by using other members of the dental workforce (HPQ23, HPQ24). In relation to the provision of preventive dental treatment for individuals at risk of RA, cost-effectiveness was also highlighted (HPQ25). The extent to which individuals at risk of developing RA would be open to receiving preventive dental treatment was perceived to depend on their current oral health behaviours (HPQ26).
This study informs our understanding of the perceptions and experiences of oral health among individuals at risk of RA. Our findings indicate that dental intervention and oral health maintenance to reduce the risk of developing RA are generally perceived to be acceptable among at-risk individuals, congruent with previous studies whereby at-risk individuals were more willing to make lifestyle changes and adopt healthy behaviours than to take preventive medication.

Despite growing evidence suggesting an association between PD and RA, our findings indicate that awareness of this association is limited among both patients and healthcare professionals. This may reflect a wider disconnect between medicine and dentistry, which was highlighted by both at-risk participants and healthcare professionals. Models to address the siloed delivery of medicine and dentistry are being explored in the UK with a view to developing pathways that facilitate access to care for high-needs patients, including those with multimorbidities.

Our findings emphasise that dental anxiety can impact on patients’ dental care-seeking behaviours. This is congruent with a previous study focusing on attitudes towards oral health in patients with established RA, in which previous negative experiences of dental care discouraged participation in a periodontal trial. Our study also highlighted that pursuing dental treatment can be hindered by treatment costs and further inequalities around access to an NHS dentist. This reflects inequalities throughout Europe; in 2021, 5% of the European Union (EU) population had an unmet need for dental examination or treatment, due to cost, distance and waiting lists. In England, primary dental care under the NHS is not free at the point of delivery and most patients are expected to pay for their treatment, with a few exceptions such as children, pregnant women and people receiving certain state benefits. Access to NHS dental care has been highlighted as an increasingly pressing problem, especially for people living in the most socially deprived areas. Several patient organisations, trade unions and even cross-party parliamentary groups have been calling for dental system reform.

The newly established Integrated Care Systems are expected to take over the commissioning responsibilities for both medical and dental care services in England, creating new opportunities for reducing the siloed delivery of these services. There have been various initiatives aimed at reducing the barriers between healthcare services, such as Making Every Contact Count (MECC). MECC aims to maximise the benefits of the interactions between various healthcare settings and patients by promoting evidence-based preventive messages and signposting between healthcare services. Dental care professionals could have an important role in providing preventive interventions and early detection of chronic conditions by capturing non-regular attendees of general healthcare services. With an expected increase in the number of people with multimorbidities, there are unique opportunities to design more person-centred, integrated medical and dental care services delivered by multidisciplinary teams.

Our study has implications for clinical practice and future clinical trials. Rheumatology teams should consider oral health as part of the holistic management of RA, including in those at risk of developing RA, while dental care professionals should consider the implications of being CCP+ at risk when managing these patients.
Whereas patients with established RA have reported difficulties in maintaining their oral hygiene due to their joint problems and the burden of numerous hospital appointments linked to having RA, leading them to deprioritise oral health, targeting CCP+ at-risk individuals offers an earlier opportunity to provide information and advice that may be easier to act on.

When designing and conducting pre-RA clinical studies, researchers must consider the barriers and facilitators to participation reported by patients. A personalised approach that considers each participant’s level of dental anxiety, their preferred methods of information delivery, location accessibility and appointment time flexibility is recommended. Future studies in this area should focus on how at-risk individuals assess risk versus benefit in deciding on participation in preventive periodontal studies, addressing the research agenda within recent guidance for conducting clinical trials and observational studies in individuals at risk of RA. This agenda also highlights the need to understand which risk factors at-risk individuals consider to be high risk for developing RA. Our study has started to address this point, but emphasises the importance of at-risk individuals understanding all potential risk factors before considering which are high risk. Future research should also explore potential differences between CCP+ at-risk individuals who accept certain preventive measures and those who decline, to determine what factors influence different approaches to health behaviours.

Our findings must be viewed in light of some limitations. First, health professionals were recruited through the authors’ professional networks and most of those interviewed were known to the interviewer, which may have introduced bias during data collection. However, in an attempt to minimise bias, the authors took care to recruit fellow health professionals who they felt did not have additional specific experience or knowledge around the impact of PD on RA, and all interviews were independently analysed by a member of the research team who was not from a dental background and had no preconceptions. Second, the rationale for this study was to explore the acceptability of periodontal treatment from the patient perspective, with input from health professionals to triangulate and explore factors that might influence patient participation. Our analysis, therefore, commenced with the patient interviews and was widened out from there, but not all data from the health professional interviews related to the patient perspective. In addition, we acknowledge that our sample of at-risk participants was already part of a CCP cohort study, and may not reflect the views of at-risk individuals who are less willing to participate in research. Nevertheless, to our knowledge, this is the first qualitative study to explore the perceptions and experiences of periodontal health in this population and provides a grounding for future research supporting the design of preventive interventional periodontal studies in individuals at risk of developing RA.
The association between poor oral health and RA may not be well understood by individuals at risk of RA and the healthcare professionals involved in their care. Information relating to this association should be tailored to the individual. While PD is common in individuals at risk of RA, seeking dental treatment can be hindered by dental phobia, treatment cost and inequalities around access to an NHS dentist. A clinical trial involving preventive periodontal treatment is potentially acceptable for individuals at risk of RA.
|
Microbial diversity and functions in saline soils: A review from a biogeochemical perspective | 956cf662-296a-48fe-a805-50c063277947 | 11081963 | Microbiology[mh] | Soil salinization has been one of the major issues that substantially affect the structure, processes, and functions of global ecosystems, including soil nutrient cycling, organic matter decomposition, plant productivity, and biodiversity. Under future climate change scenarios, it is predicted that global soil salinization will be exacerbated. Soil degradation due to increased salinization has detrimental impacts on crop production and human well-being. Given the sea-level rise (SLR) projected under climate change, soil salinization in coastal areas will directly affect population migration by reducing crop yields and negatively influencing regional socioeconomic development. The causes of soil salinization are diverse, and salinization can occur under all climatic conditions. Generally, there are two types of soil salinization, distinguished by the dominant role of natural versus human-induced actions: primary salinization and secondary salinization ( ). Primary salinization is mainly linked with the local climate, parent material, soil properties, and groundwater. Insufficient rainfall to compensate for high evapotranspiration, or a rising groundwater level, can also lead to increased soil salinity. Secondary, or human-induced, salinization results from inappropriate soil use by humans. Saline irrigation combined with poor drainage conditions can easily result in salt accumulation at the soil surface. For coastal areas, seawater intrusion due to rising temperatures and rapid urbanization is projected to increase under future climate change scenarios. Drought-induced salinity intrusion can also lead to the salinization of freshwater marsh wetlands, which in turn affects wetland ecological processes and functions (e.g., carbon dynamics). In short, a better understanding of the causes of salinization can guide targeted measures to avoid soil degradation in the future. Soil microorganisms are critical drivers of plant productivity and micro-scale ecosystem processes, and how they respond and adapt to changing salinization conditions while performing vital functions has been a hot topic in environmental microbiology in recent years. Soil microorganisms are also regarded as future allies for combating climate change and other adverse effects on ecosystems. Therefore, we need to clarify the potential effects of salinization on soil microbial composition in diverse ecosystems and decipher how microbially mediated ecosystem functions change with increasing salinity. Increased soil salinity is considered unfavorable to crop yield in most agricultural ecosystems, and it has also been reported to cause degradation of grassland and forest ecosystems. However, some studies have found that, for coastal wetlands, increasing salinity associated with SLR is, within a certain range, conducive to plant primary production, thereby promoting carbon storage. Therefore, in the context of foreseeable changes in soil salinity, ecosystem-specific management practices should be adopted to maintain soil health and sustain the delivery of ecosystem services. The important roles of microorganisms in saline soils therefore need to be fully understood and revealed for these ecosystems.
In this review, we summarize existing information on soil microbial diversity, community composition, and microbially mediated biogeochemical processes under soil salt stress. We also emphasize the development and utilization of soil microbial resources in saline-alkaline soils to serve crop production, plant growth, and the restoration of degraded soils. Future research directions and priorities are also put forward. We hope this review contributes to progress in understanding microbial biodiversity and functions under the influence of soil salinization.
Within a given range of salinity fluctuations, soil microorganisms can adjust their physiological metabolism to cope with salt stress. Some halophilic microbes actively absorb salt ions (predominantly potassium ions) to increase intracellular osmotic pressure when external salinity increases, sometimes accompanied by the export of sodium ions from the cell. Alternatively, microorganisms can balance the osmotic pressure inside and outside the cell by synthesizing low-molecular-weight organic compounds, such as certain amino acids and carbohydrates. Both osmo-adaptation strategies are undoubtedly energy-consuming. In addition, microorganisms can enter dormancy for a period of time to cope with unfavorable environmental conditions in salt marshes with frequent salinity fluctuations. Owing to strong environmental perturbations, such as fluctuating nutrient and oxygen concentrations, the proportion of dormant microbial taxa in salt marsh sediments can reach more than half or even higher. There is also a report that bacterial dormancy is more prevalent in freshwater habitats than in hypersaline lakes, possibly because extremophiles adapted to hypersaline conditions rely less on dormancy to survive and thrive. Indeed, for salt-intolerant microorganisms, excessive salinity stress leads to cell lysis and death.
Response patterns of soil microbial diversity along gradients of increasing salinity are habitat- or context-specific. Recent advances in molecular ecology, especially the development of multiple ‘omics’ technologies, provide insights into the changes or succession of diverse microbial taxa with increasing soil salt stress. An early comprehensive analysis showed that, across diverse environments worldwide, salinity was a stronger environmental determinant of microbial community composition than temperature, pH, and other physicochemical factors, and that saline soils harbor previously undescribed microbial diversity. However, at the local scale, salinity is a weaker driver of microbial community assembly, particularly in ecosystems that are already salt-rich. For instance, along a marine-terrestrial transition, soil physical structure and soil organic matter were the best predictors of fungal communities in coastal ecosystems. In the same salt marsh chronosequence, soil bacterial community assembly was collectively explained by soil physical structure, pH, and salinity. There are also fundamental differences in how different soil microbial taxa respond to salt stress. It has been reported that soil salinity did not constrain bacterial communities in hypersaline soils, whereas archaeal populations were significantly correlated with soil sodium concentration and electrical conductivity in these soils. In a desert ecosystem, soil salinity imposed strong pressure on prokaryotic communities, and microbial diversity, richness, and phylogenetic diversity (PD) were all significantly negatively correlated with soil salinity (over a range of 0–2.5 mS cm−1). Unexpectedly, across oligohaline to hypersaline estuarine wetlands spanning soil electrical conductivities of 0–18 mS cm−1, bacterial alpha diversity peaked in moderately saline soils. Similarly, along an estuarine salinity gradient, soil fungal richness was highest in the brackish marsh with intermediate salinity, although the presence of plants also had a non-negligible effect on fungal diversity. Differences among studies in how microbial diversity responds to salt stress are likely related to the composition of the communities examined. It is generally believed that fungi cope with osmotic stress better than bacteria; however, an increasing importance of fungi at higher salinity levels has not been consistently demonstrated. Usually, land degradation due to soil salinization can lead to a shift in plant diversity, which directly affects microbial diversity and composition. For instance, grassland degradation, which may be caused by soil salinization, reduced the Shannon diversity of soil bacteria and fungi, with fungi showing a greater magnitude of change than bacteria. In acidic, fungal-dominated soils in the floodplain of the Werra River in Germany, the microbial community shifted toward greater prokaryotic abundance as soil salinity increased. In arid and semi-arid areas vegetated with the halophyte Leymus chinensis, the rhizosphere bacterial community was more tightly associated with soil salinization than the fungal community. Collectively, soil background properties and vegetation types need to be taken into account when evaluating how different microbial taxa respond to salt stress.
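The contrasting monotonic and hump-shaped diversity–salinity patterns summarized above are usually assessed statistically rather than judged by eye. The sketch below is a minimal, hypothetical illustration (not taken from the cited studies) of how a unimodal response can be screened for by comparing linear and quadratic fits of alpha diversity against electrical conductivity; all values and variable names are invented for illustration.

```python
# Illustrative only: screen for an intermediate-salinity diversity peak by
# comparing linear and quadratic fits. Data are hypothetical.
import numpy as np

ec = np.array([0.3, 1.2, 2.5, 4.0, 6.5, 9.0, 12.0, 15.0, 18.0])    # EC, mS cm-1
shannon = np.array([4.1, 4.4, 4.9, 5.3, 5.6, 5.4, 5.0, 4.6, 4.2])  # alpha diversity

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

lin = np.polyfit(ec, shannon, deg=1)
quad = np.polyfit(ec, shannon, deg=2)
r2_lin = r_squared(shannon, np.polyval(lin, ec))
r2_quad = r_squared(shannon, np.polyval(quad, ec))

# A concave quadratic (negative leading coefficient) that fits clearly better
# than the straight line is consistent with diversity peaking at intermediate salinity.
print(f"linear R2 = {r2_lin:.2f}, quadratic R2 = {r2_quad:.2f}")
if quad[0] < 0:
    ec_peak = -quad[1] / (2 * quad[0])  # vertex of the fitted parabola
    print(f"estimated diversity peak at ~{ec_peak:.1f} mS cm-1")
```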
Soil harbors extremely diverse microbial species with different physiological characteristics. A large number of salt-tolerant microbial taxa have also been recognized in different ecosystems. In a polar desert, Firmicutes can largely replace Acidobacteria and Actinobacteria in soils with increasing salinity, as taxa affiliated with Firmicutes have Gram-positive cell walls and can form spores, which makes them resistant to harsh environments. Bacillus genera assigned to Firmicutes have also been shown to produce halophilic enzymes in saline soils. In natural saline soils, an increased abundance of Bacteroidetes was observed at higher salinity levels. Some well-known halotolerant bacteria classified into Gammaproteobacteria, Alphaproteobacteria, Bacteroidetes, and Verrucomicrobia were also identified in soils with the halophyte Suaeda salsa, where they help this non-host plant withstand salinity. These taxa were also found to be enriched in hypersaline soils in coastal wetlands and degraded grasslands. A study also revealed that microbial taxa belonging to Proteobacteria, Bacteroidetes, Actinobacteria, and Halobacteria have a high-salinity niche preference in desert soils. Archaea living in hypersaline systems often form a phylogenetically diverse group with complex metabolic pathways that allow them to persist there, and some ectomycorrhizal fungi, such as Tomentella, Lactarius, and Phialocephala, were abundant in more saline soils planted with black alder (Alnus glutinosa Gaertn.), although their dominance changed with plant development.
Microbial biomass and growth with increasing soil salinity
Soil microbial biomass and growth are important biological indicators for soil health assessment. Salinization-driven changes in microbial biomass and growth are pertinent to soil carbon dynamics and greenhouse gas emissions. There is no general pattern for the relationship between salinity and soil microbial biomass (or microbial biomass per unit of organic carbon) across environments including natural and manipulated soils, as reviewed by Rath and Rousk. This is because the microbial biomass in soil is constrained by a variety of historical and current environmental conditions. The quantification method may also affect the estimation of microbial biomass under salt stress: quantitative polymerase chain reaction (qPCR) exhibited greater sensitivity than a lipid-based method in estimating microbial biomass in soils along salinity gradients. Because bacteria and fungi differ in their tolerance of salt stress, the contribution of microbial residues to soil organic carbon (SOC) storage is salinity-dependent. With the salinization of coastal wetlands, the microbial necromass contribution to SOC shifts from fungal- to bacterial-dominated residues. Beyond bacteria and fungi, recent research compiling data from the published literature on coastal ecosystems found that the productivity of testate amoebae, a top-tier heterotrophic microbial group, declined with increasing salinity. Regarding the influence of soil salinity on microbial growth, a consensus has gradually emerged that excessive salinity depresses microbial growth in both natural and agricultural land. In one study, different salt solutions were added to a nonsaline soil and bacterial and fungal growth rates (leucine incorporation and acetate incorporation into ergosterol, respectively) were quantified; soil microbial growth was clearly inhibited by salt exposure. The authors also pointed out that, based on these growth measurements, fungi were more resistant to salt exposure than bacteria. In contrast, another study concluded that bacterial salt tolerance was only weakly related to soil salinity across an arid agricultural salinity gradient. The inhibitory effect of salinity on microbial growth is also mediated indirectly by the availability of soil moisture and soil organic matter. A more recent study demonstrated that saline soils harbored bacterial communities with higher salt resistance than low-salt soils, and that increased salinity within an appropriate range enhanced bacterial growth. A study by Yan and Marschner showed that microbes in saline soils did not have stronger salt resistance than those in non-salinized soils; rather, microbial activity and biomass in saline soils were substantially regulated by substrate availability. Trait-based microbiological research is better suited to revealing microbial growth responses under salt stress.
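Because absolute biomass values vary widely with site history, comparisons along salinity gradients are often made on a normalized basis, for example as microbial biomass carbon per unit of soil organic carbon, as mentioned above. The following minimal sketch illustrates that normalization; all values and plot names are hypothetical and are not taken from the studies cited here.

```python
# Hypothetical example: normalizing microbial biomass carbon (MBC) by soil
# organic carbon (SOC) before comparing plots along a salinity gradient.
mbc_mg_per_kg = {"non_saline": 420.0, "moderately_saline": 310.0, "highly_saline": 150.0}
soc_g_per_kg = {"non_saline": 18.0, "moderately_saline": 15.0, "highly_saline": 9.0}

for plot, mbc in mbc_mg_per_kg.items():
    soc_mg_per_kg = soc_g_per_kg[plot] * 1000.0   # convert g kg-1 to mg kg-1
    microbial_quotient = mbc / soc_mg_per_kg      # MBC : SOC ratio
    print(f"{plot}: MBC/SOC = {microbial_quotient:.3f} "
          f"({microbial_quotient * 100:.1f}% of SOC held as microbial biomass C)")
```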
Soil organic carbon decomposition
SOC decomposition is one of the most fundamental and intensively studied ecological processes in belowground ecology and soil science. There is an urgent need to integrate microbial community ecology into belowground biogeochemistry to advance our understanding of soil biogeochemical cycles. Previous studies have reviewed SOC decomposition and dynamics in saline-alkaline soils and/or under salt stress. Here, we pay more attention to how microorganisms respond to salt stress and thereby regulate organic carbon decomposition. The direction and magnitude of the SOC decomposition response to salinization depend on the background of the soil studied. Investigations of soil microbial growth and soil respiration along natural salinity gradients observed a strong negative correlation between cumulative carbon emission and salinity, mainly because increased salinity directly inhibited microbial growth rates and ultimately reduced the decomposition of organic carbon. A number of previous studies have also observed that increasing salinity significantly inhibits the rate of organic carbon mineralization in different types of soils. The ions involved in soil salinization also differ in how strongly they affect organic carbon decomposition. The inhibitory effect of sodium chloride on SOC decomposition (and on microbial growth) was stronger than that of sulfate. In a short-term microcosm experiment, salinity-related ionic stress may have had a larger inhibitory effect on both aerobic and anaerobic carbon mineralization than sulfate did. In tidal wetlands where freshwater and saltwater interact, salinity increases across the freshwater-to-oligohaline range can dramatically promote the activity of carbon-degrading extracellular enzymes and thereby increase microbial decomposition rates in these low-salinity wetlands. Furthermore, a global meta-analysis showed that, in tidal wetlands, hydrolytic carbon-acquiring enzyme activities were reduced by salinization while oxidative carbon-acquiring enzyme activities were enhanced, yet overall salinization still exerted adverse effects on soil organic carbon storage. Field investigations based on metagenomic sequencing of saline soils also provided clear evidence that microbial carbon metabolic potential (carbohydrate metabolism and genes encoding glycosyl transferases and glycoside hydrolases) was negatively correlated with soil salinity, so more saline soils corresponded to lower carbon emissions. Based on these collected results, increasing soil salinity is not beneficial to the long-term storage of soil organic carbon. Meanwhile, an incubation experiment revealed that salinity reduced the ability of microorganisms to decompose cellulose more strongly than labile carbon sources such as glucose, although the microbial characteristics of the tested saline and non-saline soils likely differed. In an investigation of microbial functional gene abundance along a coastal salinity gradient, the abundance of most genes encoding carbon degradation was negatively correlated with salinity; however, the abundance of ligninase-encoding genes increased with soil salinity. This may be due to the higher proportion of recalcitrant carbon in high-salt environments. Overall, microbial functional genes need to be linked with microbial carbon-degrading enzymes to understand the roles of the microbial community in regulating SOC turnover in the context of soil salinization. Microbial carbon use efficiency (CUE), which reflects the partitioning of carbon allocation between microbial growth and respiration, has also been explored under soil salinization in some studies. Soil microbial CUE inferred from eco-enzymatic stoichiometry follows a unimodal pattern along a salinity gradient in coastal soils. Within a moderate range of increased salinity, higher salinity promoted microbial CUE with little effect on microbial growth.
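The partitioning that CUE describes can be written down explicitly. The sketch below is a generic, hypothetical illustration of the basic growth-versus-respiration definition of CUE and of the metabolic quotient mentioned in the next paragraph; it is not the eco-enzymatic stoichiometry calculation used in the cited studies, and all numbers are invented.

```python
# Generic definitions with hypothetical inputs:
#   CUE  = microbial growth C / (growth C + respired C)
#   qCO2 = respiration per unit microbial biomass C (metabolic quotient)
def carbon_use_efficiency(growth_c: float, respired_c: float) -> float:
    """Fraction of assimilated carbon allocated to biomass rather than lost as CO2."""
    return growth_c / (growth_c + respired_c)

def metabolic_quotient(respiration_rate: float, microbial_biomass_c: float) -> float:
    """Respired C per unit microbial biomass C per unit time (qCO2)."""
    return respiration_rate / microbial_biomass_c

# Example: the same respiration paired with less growth gives a lower CUE,
# illustrating the inverse tendency between qCO2 and CUE noted in the next paragraph.
cue_low_salt = carbon_use_efficiency(growth_c=12.0, respired_c=20.0)   # ~0.38
cue_high_salt = carbon_use_efficiency(growth_c=5.0, respired_c=20.0)   # ~0.20
print(f"CUE low salt: {cue_low_salt:.2f}, CUE high salt: {cue_high_salt:.2f}")
```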
Increasing salinity also affects SOC decomposition and sequestration by suppressing plant productivity, as quantified at a broad scale. An attempt has been made to relate CUE to microbial genome size, suggesting that larger genomes can access a wider variety of carbon substrates and thus support higher CUE. Increased salinity caused by saltwater intrusion significantly altered soil microbial diversity and community composition, but microbial CUE was less affected. Emergent microbial functions (e.g., CUE) can also be affected indirectly, for example through salinization-induced changes in soil structure, plant biodiversity, and nutrient status. For instance, increasing soil salinity decreased microbial enzyme activities but increased the metabolic quotient (a higher metabolic quotient implies a lower CUE); the presence of plants in saline soils can relieve the adverse effects of salinity on microbial catabolism. With increasingly refined taxonomic descriptions of soil microbial communities, predicting soil CUE from the dynamics of microbial communities under salinization has become a critical topic, but so far such research is lacking. Incorporating microbial CUE into next-generation models of soil biogeochemistry is needed, yet we know little about how to do so, especially in saline soils.
Soil nitrogen cycling
Nitrogen cycling in soils includes nitrogen mineralization, nitrogen fixation, nitrification, denitrification, dissimilatory nitrate reduction to ammonium (DNRA), anaerobic ammonium oxidation (Anammox), and so on (as shown in and ref. ). Nitrogen turnover in soils is jointly affected by physical (nitrate leaching), chemical (ammonia volatilization and dissolution), and biological processes (plant uptake and microbial utilization). Here, we focus on nitrogen cycling processes mediated by soil microorganisms, including nitrogen transformation rates, functional gene abundances, and the microbial communities involved in nitrogen transformation under soil salinization. Nitrogen mineralization refers to the process by which organic nitrogen in the soil is converted into inorganic nitrogen under the action of soil microorganisms. A review suggested that soil salinization had negative effects on nitrogen mineralization in coastal wetlands because microbial activity can be inhibited by increasing salinity. In contrast, a meta-analysis indicated that soil salinization significantly increased net nitrogen mineralization, a result explained by the greater nitrogen content of plant detritus and the higher abundance of specific microbial groups responsible for nitrogen mineralization. Most current estimates of nitrogen mineralization rates are based on changes in ammonium over time in the field or during incubation. However, the DNRA process can produce ammonium, and ammonium can also be assimilated or nitrified by microorganisms; therefore, 15N tracer techniques can provide a more rigorous and effective estimate of nitrogen mineralization. Nitrification is an aerobic process mediated by ammonia oxidizers (also called nitrifiers) that oxidizes ammonia to nitrite or nitrate. The effect of salt stress on soil nitrification is tightly related to the degree of soil salinization. As reported by ref. , seawater intrusion into coastal wetlands simulated by adding salt solution significantly inhibited potential nitrification rates in the initial stage of soil salinization but markedly increased nitrification rates in the later stage.
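As noted above, mineralization and nitrification are most often reported as net rates derived from changes in inorganic nitrogen pools over an incubation period. The sketch below shows this simple bookkeeping with hypothetical values; it yields net rates only, not the gross rates that 15N pool-dilution approaches resolve.

```python
# Hypothetical incubation data (mg N per kg dry soil); all values are invented.
nh4_initial, nh4_final = 6.0, 9.5      # ammonium-N at day 0 and day 28
no3_initial, no3_final = 12.0, 21.0    # nitrate-N at day 0 and day 28
days = 28.0

# Net N mineralization: change in total inorganic N (NH4+ + NO3-) over time.
net_mineralization = ((nh4_final + no3_final) - (nh4_initial + no3_initial)) / days

# Net nitrification: change in NO3- (plus NO2-, if measured) over time.
net_nitrification = (no3_final - no3_initial) / days

print(f"net N mineralization: {net_mineralization:.2f} mg N kg-1 day-1")
print(f"net nitrification:    {net_nitrification:.2f} mg N kg-1 day-1")
```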
Within a wider range of soil salinity (0–20 mS cm−1), the abundance of functional genes (such as amoA and hao) decreased along increasing salinity gradients. There may be an optimal salinity range for nitrifier activity: within this optimum, increasing salinity can enhance the nitrification rate, whereas excessive salinity suppresses nitrification by constraining microbial physiology. Different nitrifier groups can exhibit distinct sensitivity to increasing salt exposure. Culture experiments showed that ammonia-oxidizing bacteria (AOB) can dominate nitrification and reach higher abundance at intermediate salinity, whereas ammonia-oxidizing archaea (AOA) exhibited higher salinity tolerance and were less affected by salinity elevation. By combining 13C-DNA stable-isotope probing (SIP) with genomic sequencing, a considerable number of salt-tolerant ammonia oxidizers belonging to Nitrosococcus and Nitrosospira were identified in saline and non-saline agricultural soils. These ammonia oxidizers harbor a variety of genes involved in Na+ extrusion/H+ import and compatible solute biosynthesis. Traditionally, nitrification involves two steps: ammonia oxidizers first oxidize ammonia to nitrite, and nitrite-oxidizing bacteria (NOB) then convert nitrite to nitrate. Complete nitrifiers, such as some Nitrospira bacteria, have been found to catalyze the oxidation of ammonia all the way to nitrate (termed comammox, or complete ammonia oxidation). However, little is known about them in saline soils, which represent a distinctive environment. The influence of soil salinization on denitrification varies with soil background salinity level, salinization stage, and ecosystem type. Depressed denitrification was observed in tidal freshwater wetland soils after exposure to saline water in the Hudson River, USA. A laboratory incubation experiment with tidal forest soils showed that increasing salinity stimulated denitrification in soils with lower background salinity but had less effect in soils with relatively higher background salinity. As inferred from the results of ref. , in the initial stage of soil salinization, low salinity promoted soil denitrification but inhibited nitrification, resulting in higher N2O emission; in the later stage of salinized soil development, salinity significantly promoted nitrification, which provided more nitrate substrate for denitrification even though denitrification itself was depressed, and consequently higher N2O generation was also observed. A meta-analysis also suggested that the effect of soil salinization on denitrification across coastal wetlands was not statistically significant, although a decreasing trend was apparent. Because soil denitrification preferentially occurs under anoxic conditions, changes in the soil redox environment, moisture, and substrate availability caused by soil salinization can have a non-negligible impact on denitrification. Increasing soil salinity also has contrasting effects on microbial DNRA. Both DNRA and denitrification reduce nitrate; denitrification causes nitrogen loss in the form of nitrogen gas, whereas DNRA results in the conservation of nitrogen in soils as NH4+. A comparison of denitrification and DNRA potentials in freshwater and oligohaline wetland soils showed that increasing salinity shifted the nitrate reduction regime from denitrification toward DNRA.
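For reference, the two competing nitrate-reduction routes contrasted above can be summarized by their simplified, schematic pathways (intermediates and exact stoichiometry vary with organism and conditions):

```latex
% Schematic nitrate-reduction pathways (simplified)
\begin{align*}
\text{Denitrification (N lost as gas):}\quad
  &\mathrm{NO_3^- \rightarrow NO_2^- \rightarrow NO \rightarrow N_2O \rightarrow N_2}\\[4pt]
\text{DNRA (N retained as ammonium):}\quad
  &\mathrm{NO_3^- \rightarrow NO_2^- \rightarrow NH_4^+}
\end{align*}
```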
A similar study also suggested a positive relationship between DNRA and salinity level in freshwater wetlands of Massachusetts, USA. An integrated data analysis, however, showed that soil salinization generally suppressed DNRA. The factors controlling soil DNRA are complex; along an estuarine and intertidal wetland gradient, DNRA potential was regulated more by substrates than by nrfA gene abundance. Soil organic matter quality and sulfide content also play important roles in the partitioning between DNRA and denitrification in estuarine saline soils. DNRA tends to be favored in anoxic environments where organic carbon or sulfide is readily available as an electron donor, whereas denitrification is more strongly supported in soils with limited electron donors but sufficient nitrate. Anammox is another key pathway responsible for nitrogen loss from soils (NH4+ + NO2− → 2H2O + N2); it is mediated by specialized microbes and is ubiquitous in natural and artificial settings. Anammox bacteria, which are affiliated with the phylum Planctomycetes and grow slowly, are also more prevalent in saline/alkaline than in acidic soils. Although the contribution of Anammox to N2 production in tidal marshes was low compared with denitrification, its relative importance was highest in low-salinity marshes, and increasing salinity lowered the importance of Anammox for nitrogen removal. It has also been demonstrated that soil N2 production in coastal tidal flats and inland paddies was dominated by denitrification rather than Anammox, and that soil salinity drove the Anammox bacterial community in coastal flats (dominated by Candidatus Scalindua and Candidatus Kuenenia) and paddy soils (dominated by Candidatus Brocadia). Incubation experiments indicated that low-salinity treatments stimulated soil Anammox, whereas high-salinity additions inhibited it. A meta-analysis by ref. showed that soil salinization on average stimulated Anammox more than twofold. A higher abundance of the hzo gene (which mediates the Anammox process) was also observed in soils of tidally influenced wetlands than in riverine wetlands. Anammox bacteria generally exhibit higher salt tolerance, which allows them to adapt to high-salt environments; this may explain the increasing trend in Anammox potential after salinity exposure. Collectively, we summarize existing reports on soil carbon and nitrogen cycling under the influence of salinization across different ecosystems in a conceptual diagram ( ).
Soil phosphorus and sulfur cycling
Phosphorus and sulfur are widely involved in the synthesis of microbial cell membranes, DNA, and proteins in the soil. However, compared with carbon and nitrogen turnover, far less research has addressed changes in soil phosphorus and sulfur and the roles of microorganisms under the influence of salinization. Existing studies have shown that salinity affects the availability of phosphorus in the soil. For instance, when freshwater forested wetlands were converted into oligohaline marshes, phosphorus mineralization was stimulated by soil salinization. Compared with freshwater wetlands, soils in brackish wetlands contained more available phosphorus, which can be ascribed to the higher abundance of genes involved in phosphorus solubilization (e.g., gcd and ppa) and phosphorus mineralization (e.g., phoD, phy, and ugpQ).
This study also implied that slight salinization of coastal freshwater wetlands may enhance soil phosphorus availability by reshaping the microbial community that mediates phosphorus cycling. Salt-tolerant phosphate-solubilizing bacteria (PSB) also exist in saline soils and play critical roles in converting unavailable phosphorus into bioavailable forms. Sulfur oxidation and reduction are the main metabolic processes in soil sulfur cycling and are carried out microbially by sulfur-oxidizing bacteria (SOB) and sulfur-reducing bacteria (SRB) in all ecosystems. Wetland salinization can increase the generation of sulfides, which are toxic to plants. Increased soil salt stress can reduce soil oxygen availability and, as a consequence, induce the accumulation of sulfide in soils. Some SOB and SRB groups living under salt-saturated conditions have been identified, such as the SOB genera Thioalkalivibrio, Thioalkalimicrobium, Thioalkalispira, and Thioalkalibacter within the Gammaproteobacteria, and the SRB genera Desulfonatronovibrio, Desulfonatronum, and Desulfonatronospira belonging to the Desulfovibrionales. The SOB and SRB communities in saline soils, and the sulfur-cycling processes they mediate, still need to be characterized in detail.
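For orientation, the two halves of the microbial sulfur cycle discussed here can be summarized by simplified overall reactions, taking dissimilatory sulfate reduction as the representative reductive process and using CH2O to stand for generic organic matter; real pathways proceed through several enzymatic steps:

```latex
% Simplified overall reactions of microbial sulfur cycling (schematic)
\begin{align*}
\text{Sulfate reduction (SRB):}\quad
  &\mathrm{SO_4^{2-} + 2\,CH_2O \rightarrow H_2S + 2\,HCO_3^-}\\[4pt]
\text{Sulfide oxidation (SOB):}\quad
  &\mathrm{H_2S + 2\,O_2 \rightarrow SO_4^{2-} + 2\,H^+}
\end{align*}
```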
Soil microorganisms are viewed as an important tool against the threat of irreversible global salinization, as they can mitigate some of the negative impacts of salt stress on plants. Saline soils are important resources in this respect because they provide specific habitats for many novel salt-tolerant microorganisms, from which researchers have isolated a large number of plant growth-promoting (PGP) microorganisms and applied them to agricultural production or the restoration of degraded soils. For instance, 1-aminocyclopropane-1-carboxylate (ACC) deaminase-producing bacteria were isolated from the rice rhizosphere in coastal soils and showed considerable positive effects on seed germination and root growth. Bacteria belonging to the Bacillales, Actinomycetales, Rhizobiales, and Oceanospirillales have also been isolated from paddy soil, and their growth-promoting effect on rice could be associated with the production of ACC deaminase, the induction of proline accumulation, and the reduction of salt-induced malondialdehyde in the plant. A study demonstrated that the core rhizosphere microbiome of the halophyte Suaeda salsa in coastal saline soils harbors genes encoding processes of salt stress acclimatization and nutrient solubilization. Many salt-tolerant PGP microorganisms, such as Azotobacter, Bacillus, Brevibacterium, Halococcus, Lysinibacillus, Paenibacillus, and Pseudomonas, have been identified and characterized as candidates for promoting plant growth in saline soils. Plants can also recruit specific salt-tolerant bacterial consortia, rather than individual bacterial members, to achieve enduring resistance against salt stress and promote plant growth. Several mechanisms can explain how salt-tolerant microorganisms improve plant tolerance under salt stress ( ). Some salt-tolerant PGP bacteria encode ACC deaminase, which degrades ACC (the precursor of ethylene) and thereby reduces the negative effects of ethylene on plant growth. PGP bacteria can also associate tightly with plant roots and produce osmolytes (e.g., carbohydrates and proteins) that alleviate osmotic stress. Some halotolerant PGP bacteria can colonize plant roots as endophytes and produce various antimicrobial metabolites against pathogens under stressful conditions. It was also found that the PGP bacteria Kocuria erythromyxa EY43 and Staphylococcus kloosii EY37 can reduce the uptake of toxic ions (sodium and chloride) from saline soils, benefiting the growth of strawberry plants. The application of salt-tolerant PSB is viewed as an effective and economical way to manage phosphorus deficits in saline soils, as these bacteria can mobilize otherwise bio-unavailable phosphorus. Several halotolerant bacterial isolates (belonging to Klebsiella, Pseudomonas, Agrobacterium, and Ochrobactrum) can fix nitrogen and produce ACC deaminase, thereby enhancing the growth of peanut seedlings. Arbuscular mycorrhizal fungi (AMF) also hold great promise for alleviating salinity stress in host plants owing to their importance in plant nutrient uptake, water absorption, and protection from pathogens. Under stressed conditions, plants can promote the growth of specific microorganisms in the rhizosphere by regulating the composition of root exudates, thus improving plant adaptation in harsh ecosystems.
Changes in root exudates (e.g., phenolic acids and terpenoids) directly affect the composition and metabolic activity of the rhizosphere microbiome. In turn, root exudates contribute to the formation of soil carbon pools such as microbial residues and mineral-associated organic carbon (MAOC), thereby affecting carbon sequestration in saline soils. However, we still lack an integrated understanding of plant-microbe-soil system responses to salt stress. The rapid development of metagenomics and other molecular techniques provides powerful tools for exploring additional salt-tolerant microorganisms, which can help us better understand how these microorganisms survive in saline soils, how they affect soil health, and how they mediate ecosystem functions. Therefore, we propose that the development of microbial resources in saline soils through advanced omics techniques is beneficial to ecosystem health and sustainable agriculture. Applying metagenomic sequencing is a convenient and effective way to capture changes in microbial diversity and community structure under salt stress ( ). We can also use transcriptomics to analyze the metabolic processes and gene regulation of active microorganisms during salt stress. Considering that salt stress can alter the expression of defense-related proteins, activate antioxidant machinery, and induce the accumulation of osmolytes and the synthesis of exopolysaccharides, it is particularly important to reveal the molecular mechanisms of salt stress in microbial and plant cells through these multi-omics approaches, including metaproteomics and metabolomics. Soil bioengineering tools like metagenomics and metabolomics can help decipher novel microbial metabolites from stressed ecosystems. Salt-specific microbial metabolites and the genes they trigger can also be explored with these multi-omics approaches in the near future. The combination of several molecular ‘omics’ approaches makes it possible to reveal new mechanisms of microbially driven processes under environmental change. With the help of microbiome-assisted strategies, soil-borne plant pathogens can be effectively suppressed through the manipulation of soil microbiota, such as by soil pre-fumigation, organic amendment, crop rotation, and intercropping in agricultural ecosystems. In natural settings, plants often selectively recruit beneficial microbiomes in the rhizosphere by changing the composition of root exudates to withstand harsh environments. A recent review also pointed out that inoculation with key microbiota members has the potential to improve plant growth by influencing plant traits. Additionally, the alteration of soil microbial communities and functions in saline soils by emerging pollutants (EPs), such as micro/nanoplastics, metallic nanoparticles (MNPs), antibiotics, and organophosphate esters (OPEs), cannot be ignored. Disentangling how these EPs, in conjunction with environmental changes (e.g., salt stress), affect microbially mediated geochemical cycling and energy flow has become a hot topic.
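As a concrete but deliberately minimal illustration of the kind of community summary yielded by the sequencing-based surveys discussed above, the sketch below computes the Shannon diversity index from hypothetical taxon count tables for a saline and a non-saline sample; real workflows would add quality filtering, rarefaction or other normalization, and statistical testing.

```python
import numpy as np

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over taxa with nonzero counts."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

# Hypothetical taxon count vectors (e.g., OTU/ASV counts from amplicon data).
non_saline = [120, 95, 80, 60, 45, 30, 22, 15, 8, 5]
saline     = [260, 140, 40, 12, 6, 2, 0, 0, 0, 0]

print(f"Shannon H' non-saline: {shannon_index(non_saline):.2f}")
print(f"Shannon H' saline:     {shannon_index(saline):.2f}")
```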
Soil salinization is one of the major global issues owing to its adverse influence on agricultural productivity and ecosystem sustainability. Numerous molecular surveys have been conducted to explore the changing patterns of microbial diversity and community composition in saline soils from diverse ecosystems, including coastal wetlands, deserts, inland salt lakes, forests, grasslands, and agricultural land. These studies have mainly focused on soil bacteria and fungi; few have explored the diversity and functions of archaea and protists, although these groups also play critical roles in the soil microbial food web and in microbially mediated biogeochemistry in saline soils. In soils salinized by either natural or anthropogenic factors, the general patterns and community assembly of these groups still need to be extensively clarified across different habitats. Recent studies have emphasized that microbial roles in mediating soil biogeochemical cycles should be incorporated into process-based models to improve the accuracy of Earth biogeochemical models. However, these models have paid little attention to the effects of soil salinization on microbially driven functions such as SOC decomposition, nutrient turnover, and greenhouse gas emission. In particular, little research has accurately captured the periodic fluctuations in microbial diversity, and their knock-on effects on biogeochemical cycles, that accompany soil salinization. Soil salinization not only alters edaphic properties but also drives vegetation succession and changes in environmental conditions. It is necessary to strengthen research on the spatial–temporal stability of microbial communities under salt stress, tipping points, disturbance-recovery processes, and the importance of historical events and local context in driving community responses to changing soil salinity. Soil, microorganisms, and plants interact with each other to perform various ecological functions, and this tripartite interaction has been reviewed and discussed in different environments. However, in salinized soils, microbial diversity, plant growth, and soil conditions are often explored individually or in pairs, and we lack an understanding of how the interactions between them change under salinization. Microbial omics technologies and advanced microscopy should provide new insights into the mechanisms of interaction among saline soils, microorganisms, and plants. Meanwhile, future research needs to enhance the mechanistic understanding of how soil microorganisms regulate context-dependent biogeochemical processes across spatial and temporal scales. The development of salt-tolerant functional microorganisms from different habitats can help counteract the adverse effects of soil salinization on ecosystem functions. For instance, tomato plants inoculated with the salt-tolerant plant growth-promoting rhizobacterium (PGPR) Sphingobacterium BHU-AV3 showed improved salt resistance through decreased levels of oxidative stress, lipid peroxidation, reactive oxygen species (ROS), and cell death, and enhanced antioxidant activities and energy metabolism. The halotolerant PGP bacterium Staphylococcus sciuri ET101 has also been shown to protect plant photosynthesis through the interplay between carboxylation and oxygenation under salt stress conditions. However, the interactions between microorganisms and plants under salt stress remain poorly resolved.
Based on remediation or restoration targets, such as enhancing plant productivity, stress resistance and nutrient supply, maintaining biodiversity, and mitigating greenhouse gas emissions, the development and utilization of targeted microbial resources in saline soils can be strengthened. Criteria for applying these microbial resources under different scenarios should then be developed for different saline ecosystems.
Soil is a non-renewable resource, and it is difficult to restore it to its original healthy state once it has been degraded by salinization. Soil salinization is occurring at an unprecedented rate across global ecosystems, and climate change and anthropogenic activity are expected to exacerbate it further. It is therefore necessary to develop effective measures to reduce the risk of salinization caused by human activities. This review presents a comprehensive view of the impacts of salt stress on microbially mediated biogeochemical cycles and highlights that the soil microbiome can be harnessed as a nature-based solution for the restoration of salinized soils and the sustainability of saline ecosystems. Future studies should focus on answering key questions that will help us cope with salt stress and ecosystem degradation, such as: (1) What are the tipping points in the responses of different soil microbial constituents in saline and non-saline soils? (2) How can environmental conditions be altered, under human management, to modulate microbial diversity and functions? (3) How can microbiomes in saline soils be used to serve human society? (4) How do microorganisms interact with each other and with higher trophic levels (e.g., nematodes) under salt stress, and how can these interactions be used to guide the maintenance of ecosystem diversity and smart agricultural production? Answering these questions will enable us to take active and effective measures to mitigate the adverse effects of soil salinization. Salt-tolerant microorganisms should be applied as a biological tool for improving plant tolerance of salt stress. We also emphasize that the development of salt-tolerant microorganisms serves agricultural production, environmental pollution control and the restoration of degraded ecosystems, and ultimately benefits human well-being.
This article does not contain any studies with human or animal subjects.
Guangliang Zhang: Conceptualization, Writing – original draft. Junhong Bai: Conceptualization, Supervision, Project administration, Writing – review & editing. Yujia Zhai: Writing – review & editing. Jia Jia: Investigation, Validation, Formal analysis. Qingqing Zhao: Investigation, Resources, Formal analysis. Wei Wang: Investigation, Formal analysis. Xingyun Hu: Conceptualization, Visualization.
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
A Qualitative Study Exploring the Barriers and Facilitators for Maintaining Oral Health and Using Dental Service in People with Severe Mental Illness: Perspectives from Service Users and Service Providers | 881d9eac-0f1a-4f96-8c80-cf4cc34e6e21 | 8998854 | Dental[mh] | Oral health is an important part of general health. Oral diseases affect at least 3.58 billion people worldwide . Oral health affects aspects of social life, including self-esteem, social interaction, job performance and overall quality of life . In addition, oral diseases are associated with other physical health conditions such as diabetes and coronary heart diseases . People with severe mental illness (SMI) have some of the worst health indices and the lowest life expectancy of any section of the UK population . In England the prevalence of people with SMI (patients with schizophrenia, bipolar affective disorder and other psychoses) is 0.95% (all ages) and premature mortality in adults with SMI is 103.6 per 100,000 . The burden of oral disease is particularly high in people with SMI and it remains a largely neglected issue. The evidence shows that oral health among people with SMI is poorer than among the general population. They have a nearly three times higher chance of losing all their teeth (95% Confidence Interval (CI) 1.7–4.6) and higher caries rates (mean difference 5.0, 95% CI 2.5–7.4), compared with people without SMI . Poor oral health has a profound effect on their general health and quality of life . Complications from untreated tooth decay are reported to be a common cause of non-psychiatric hospital admissions among patients with SMI . This means that, in addition to the impacts on individual health, oral disease may have implications for health services, including the associated costs of treatment. Oral health interventions have been demonstrated to be cost-effective in relation to children’s oral health . It is therefore important to recognise the potential economic benefits of improved oral health amongst those with SMI. Despite the importance of oral health, maintenance of regular oral hygiene is a challenge for this population. A review by Turner et al. reported that individuals with SMI were significantly less likely to maintain regular tooth brushing in comparison with the general population (OR 0.19, 95% CI 0.08–0.42) . Behavioural risk factors for oral health, such as higher levels of consumption of sugary food and drinks, are also more common amongst those with SMI . Furthermore, despite the high treatment needs in this population, people with SMI are less likely to access dental services and receive routine dental care . This may be due to a lack of motivation, failure to make appointments for routine dental check-ups, inadequate cooperation, poor communication, and social and financial barriers for accessing oral healthcare. In addition, maintaining good oral health can be particularly difficult for this population due to specific challenges, such as side effects of anti-psychotic and anti-depressant medications (e.g., dry mouth) and co-morbidities associated with mental and physical health conditions. Dental anxiety, phobia, symptoms of mental illness and lack of support systems may also contribute to reluctance for dental visiting or maintenance of oral hygiene . 
A phenomenological study exploring oral health experiences and needs among young adults after a first episode of psychosis reported barriers both in patients' general oral care and in their experiences with dental professionals, in particular how interventions failed to address their needs. Furthermore, in a narrative review, Slack et al. (2017) investigated barriers to oral health in people with SMI at the individual, organisational and systemic levels. That review focused mainly on downstream, individual-level factors; organisational, systemic and policy factors were not covered in depth because fewer studies have examined more upstream approaches (at the organisational and systemic levels) to improving oral health outcomes in this group. To tackle poor oral health, this population urgently needs interventions that are tailored to their needs and that address the barriers to appropriate oral healthcare at different levels (individual as well as organisational and systemic). The recent consensus statement on a five-year action plan to improve oral health in people with SMI also highlighted the importance of understanding the barriers experienced by different stakeholders and suggested more collaborative 'whole-person care' approaches. It is therefore important to explore the barriers and facilitators affecting different aspects of oral health care and service use in people with SMI, from the perspectives of both service users and service providers. Thus, in this study we aimed to explore how people with SMI and the related service providers experience and express the barriers and facilitators related to oral health.
The present study reports a qualitative exploration of the barriers and facilitators for the maintenance of oral health and dental service use by people with SMI, together with the views of service providers. Research ethics approval was obtained from the University of York Health Sciences Research and Governance Committee. All research participants provided written informed consent prior to participating in the study. 2.1. Participants and Setting A convenience sampling technique was employed to recruit service users to the study. People aged over 18 years, living in the UK and with a self-reported diagnosis of SMI (schizophrenia, schizoaffective disorder or bipolar disorder) were recruited as service users. Health professionals and carers with experience of providing health services to patients with SMI were purposively recruited to allow for a mix of service providers involved in the provision of both dental and mental health care for people with SMI. Participants were recruited through 'Involvement@York', the patient and public involvement network and resource co-ordinated by the University of York, through social media posts, and by using existing contacts to spread the word about the study. The service providers were also identified from the professional networks of the co-investigators. Eligible participants were invited by email to participate in the research, and those expressing interest were provided with the information pack and the consent form. An introductory video of the project, created to provide an overview of the research and to aid recruitment, was sent with the invitation email. A mutually convenient time was scheduled for the interviews upon receipt of the signed consent forms. 2.2. Data Collection Based on the preference and convenience of the participants, the in-depth interviews were conducted using either one-to-one or dyadic interview techniques. In line with the COVID restrictions in place at the time of the interviews, all interviews were conducted remotely via the video conferencing platform Zoom. The interviews were co-facilitated by MRF and MPM. A semi-structured topic guide was used to guide the flow of the interviews. Service users were asked to share their experience of caring for their oral health and of using dental services, any challenges they faced in doing so, and their perception of the best possible service. The service providers were asked to share their experience of providing care to patients with SMI, their perceptions of the challenges that people with SMI face in seeking oral health care, and their views about the best possible service that could be implemented. Prior to the interview, a Zoom meeting link was emailed to the participants with guidance on how to join the meeting. Participants were asked to set aside two hours for the session to allow adequate time for data collection, and were given the choice to keep their camera switched off during the interview if they did not feel comfortable speaking with the camera on. The interviewers kept their cameras on throughout the interview so that participants could pick up non-verbal facial cues of active and compassionate listening, rather than relying only on the interviewers' tone of voice. All interviews were video recorded using the default recording function in Zoom, and participants' consent was once again sought prior to initiating the recording.
As a token of appreciation for their time, all participants were offered a GBP 20 Amazon e-voucher. Upon completion of each interview, the video files were deleted, the audio recordings transcribed verbatim, and transcripts pseudonymised along with the removal of any identifying information. The interviewers wrote down their reflections immediately after the interviews. 2.3. Data Analysis Data analysis was carried out following the thematic analysis procedure described by Braun and Clarke (2006) . In the first step, the two reviewers (MRF and MPM) read the transcripts and discussed them along with their individual reflections. In the second step, coding was performed individually by the two reviewers, which was then discussed to ensure clarity and agreement. Once initial codes were agreed on, they were then collated to form categories and sub-themes. Themes were created by compiling the sub-themes for both service users and the service providers, to identify barriers and facilitators at three levels—personal level, inter-personal level and system level (with thematic maps provided as additional information). NVivo 12 Pro was used for analysing the data. The data were analysed concurrently to help identify new themes emerging from the data and this also provided an opportunity to add or modify questions to help further explore phenomena during subsequent interviews. Rigour for the qualitative study process was supported through interviewers’ recording their reflections during or after the interviews and through constant comparison between the accounts of the participants to reduce analysis bias . In addition, in order to obtain validation of our study findings, we conducted 11 one-to-one stakeholder consultations with a diverse range of stakeholders (Head of Research, and Deputy Director of two NHS Foundation Trusts; Professor and Honorary Consultant of Dental Public Health; Director of Research & Clinical Senior Lecturer/Honorary Consultant; Consultant, Health Care Public Health Team, NHS England and NHS Improvement (North East); Associate Research Delivery Manager (NIHR); Peer Consultant and Co-Production advisor, Training Programme Director for Oral Health Improvement and Dental Care Professionals; Physical Health Lead Nurse; Senior Lecturer in Dental Nursing and Dental Hygiene; Member of Oral Health Promotion Team of a NHS Foundation Trusts). We discussed the study and emerging themes on barriers and facilitators to oral health, the data synthesis plan and future recommendations. We took notes on the views and recommendations of the stakeholders during the consultations, which was reflected in the findings and recommendation of the study.
A total of 17 dyadic and one-to-one interviews were conducted over a period of three months (July–September 2021), with service users and informal carer and service providers. The sessions lasted two hours and allowed time for in-depth exploration of the oral health issues for people with SMI, captured at the service user and the service provider level. The participant demographics are provided in . The barriers and facilitators explored were categorised at three levels: (1) the personal level, relating to those barriers and facilitators that the individual service user faced for their oral health care, and the service providers’ perspectives regarding delivery of care; (2) the inter-personal level, indicating those faced at the service user–service provider interface and (3) the system level, for identifying the wider elements and their influence. The categories compiled were grouped as sub-themes under each of the three levels for both the service users and the service providers ( and ). Analysis of the data revealed cross-cutting themes and for this reason thematic analysis findings for service users and services providers are presented in combination ( ). 3.1. Theme 1: Ameliorating the Problem This theme contained sub-themes that linked to the oral-health-related barriers and facilitators that the service users and the providers faced at the personal level. 3.1.1. Impact of Mental Ill-Health The service users talked about how their mental illness put them at a disadvantage in comparison to the general population. One example of this was a lack of motivation impacting the service users’ ability to maintain good oral hygiene. “I mean I can spend days when I can’t actually get out of bed never mind think about cleaning my teeth, you know that’s just not something that’s going to happen.” (Service user J-06 with diagnosis of bipolar disorder). “I think, when you have a severe mental illness, you can neglect yourself. And a part of that can be you neglect your oral health.” (Service user H-03 with diagnosis of schizophrenia). One of the service users also mentioned how her mental health had worsened during the COVID-19 restrictions due to a sense of isolation, which further fed into her apathy for her general well-being and oral health in particular. “I mean, at one point I was really good and I was doing everything you know brushing three times a day, using the little inter dental brushes, the mouthwash, I was at the top end of the regime and then my mental health got worse, I think during the second lockdown and that’s when I lost the momentum and I’m struggling to get that momentum back.” (Service user M-02 with diagnosis of schizophrenia). Participants also mentioned how their negative life experiences heavily influenced their intention to visit the dentist for regular check-ups or for treatment. These relate to the need to have a sense of trust and rapport with their dental health professional before they could feel comfortable about being under their care. “if somebody said that well Hayley ( pseudonym ) can’t look after you today, I drive away for a while, you know, a couple of weeks, if need be, I know she’s moved to another dentist I say where she’s moved to please? 
because you know, I trust that person, you know I would want to be on her caseload and because it’s an important thing to people who have endured poor mental health and serious mental illness that when they start to trust somebody it becomes a very particular relationship.” (Service user J-04 with diagnosis of bipolar disorder). Furthermore, the intrusive nature of dental treatments was also reported as a significant barrier due to associations with past experiences or negative emotions. “I think it’s really a common thing like a lot of people have had experiences that you know felt very intrusive and as an invasive and around the mouth, it makes sense to me that, like dentistry is really triggering for that and really replicate some of that feeling of powerlessness feeling of being out of control, it being painful like having to have your mouth open and you’re not in control of that.” (Service user Sa-07 with diagnosis of bipolar disorder) In addition, the participants also spoke about the range of emotions that they go through whenever they have to visit a dentist such as shame, fear, anxiety and distress related to dental treatments. It was highlighted that sitting in the waiting room and having to listen to the drilling sounds can provoke anxiety. It was apparent from the interviews that oral health was considered an integral part of general health and well-being and the major barrier that they faced at the service provider’s personal level was lack of understanding about their mental illness and lack of consideration on how to manage the individual patient according to their needs. “This level of education is really needed with these groups of individuals around trauma and you know, so that they are psychologically informed and trauma informed. You know who wants to put anybody through any kind of distress, but you know so it’s a group of people that really do need to learn more about their patients.” (Service user K-05 with diagnosis of schizophrenia and autism). 3.1.2. Having a Positive Attitude The views of the service users regarding the need for sensitivity and tact while dealing with patients with mental illness were reflected by the service providers as well. They agreed that there was a need to move away from a position of judgment, and to use compassion and empathy when dealing with patients. “I think it’s just important not to judge and actually what you think may be normal for a group of patients isn’t and if some of my patients brush maybe once or twice a week, then that’s better than never and that’s actually all I can expect from them. So, I think it’s about being realistic and non-judgmental and starting with basic things…” (Service provider C-01 working as a community dentist). The need for the development of effective communication skills in order to be able to effectively communicate with patients who require extra support was also highlighted by the health professionals as an important area that required improvement. “So just as much as tooth brushing is a habit, it’s a healthy habit and it needs to be encouraged so again it just comes back to the way in which that conversation happens. It’s not the ‘you need to do it like this’, we need ‘we’re here to educate you and tell you what to do’, it’s more ‘do you understand the benefits of what I am teaching you and can you demonstrate it to me so that I know that you’re able to do it well yourself’ and that’s the approach that I think could go somewhere.” (Service provider B-10, special care dentist). 3.1.3. 
Keeping Oral Health on the Agenda The need for effective communication skills for patient management, highlighted by the health professionals, was further explored to understand how this could be incorporated to address the oral health needs of the service users with SMI. Taking a more holistic approach by considering not just patient’s teeth but the whole person, proportional to their individual needs, was suggested as the way forward. “I think that sometimes people may misunderstand that oral health just means mouth and teeth but actually it’s about the whole of the person, including medical but also including and I suppose it’s sort of taking a rounded approach to the person and sort of a holistic approach for that person.” (Service provider C-01, community dentist). “I think education is quite the key and also trying to break down those barriers and say you know we are kind of patient people, we do understand your problems and anxieties and try to find ways of managing that and dealing with that and showing them that it’s not as bad as what they think is.” (Service provider- H-04, special care dentist). Training dental professionals in mental health was another area highlighted as needing attention to address the barriers to providing the patients with best possible care. “Yes, indeed clinicians don’t tend to raise things if they’re a bit anxious about whether they’re able to deal with what comes up. So, I think there is a need for some mental health training for the dentist. May be even mental health first aid course that can be two days. Not expecting the dentists to train as mental health professionals, that’s a little bit training we have.” (Service provider- D-07, caring for a person with schizophrenia). On the other hand, it was also discussed that mental health staff could be trained to look out for their patient’s oral health by flagging up any signs of a problem and referring the patient to receive appropriate care. “I think there’s a real awareness now that physical and mental health go hand in hand, and we need to have an angle on both and doesn’t mean you have to be an expert in dentistry in dental hygiene, but just having a general awareness of kind of I don’t know what warning signs or things to look out for. Just making sure, a lot of it might just be making sure people have the regular checks and understanding the importance of that.” (Service provider S-06, occupational therapist). 3.2. Theme 2: Use of a Tailored Approach Use of a tailored approach was identified as a facilitator at the inter-personal level by both the service users and the service providers. 3.2.1. Need to Be Heard and Understood The service users felt that they faced discrimination or experienced patronising attitudes because of their impact of mental ill health’, which created barriers for them in accessing dental services. “Like the stigma and discrimination around mental health in society generally I think comes into it. People feel anxious that they’re going to be judged and misunderstood and I think that you know, makes it difficult for people, especially like in the acute phase of their illness to sort of make contact with other health providers.” (Service user Sa-07 with diagnosis of schizophrenia). The mental health service users expressed their desire to be involved in their treatment planning, to be treated as a whole individual and be given a voice. 
“Mental health, I would say it’s already exploited you know in terms of not giving patients a voice and disabilities can be very life limiting. So, giving people the options and scope around that gives them a strong voice and a recognition that they are involved in their own treatment in healthcare.” (Service user S-01 with diagnosis of schizophrenia). 3.2.2. Considering the Individual Needs The service providers, similarly, spoke about how important it is to provide adequate support to people with SMI and the importance of framing the narrative in a way that would involve them more in their own care and decision making. “Patient should be provided information about the potential side effects of their medication that they are prescribed, they should have a fully informed choice. So again, that will come under the mental health side of things. I think, historically, some mental health services avoided telling people about all the potential effects because the medications were pretty problematic. Hopefully now that’s pretty much legal and patients in the hereafter provide fully informed consent but I am not sure how thoroughly people still do that” (Service provider C-05, clinical psychologist). However, one of the biggest barriers from the health service provider’s point of view was the lack of motivation and lack of compliance on the part of the patient, because even when there was perceived to be adequate support available for a patient, their non-concordance would potentially prevent the service user from benefitting from the dental services. “The main thing should just be getting people through the door and to have an examination or to have education about the kind of oral hygiene, that is where there will be the most benefit.” (Service provider C-05, clinical psychologist). For this reason, involvement of the carers such as the family or friends was brought up as an important element of managing patients with SMI, not only to motivate them but also to liaise with the health professionals on their behalf. “I have had people who don’t want to discuss their trauma or their past and have consented to the person who is supporting them to discuss it and so sometimes having that other person there, it gives them a different way of communicating and if they don’t want to speak about it directly but have allowed their carer or support worker to do on their behalf, that’s also happened sometimes.” (Service provider C-01, community dentist). 3.3. Provision of Comprehensive Support At the systems level, it was clear from both the service users’ and service providers’ narrative that more comprehensive support is needed to help people with severe mental illness to overcome barriers for their oral health care. 3.3.1. Utilisation of Dental Services At the service level, there were two main barriers that were identified by the service users: (1) accessibility issues and (2) lack of availability of integrated care. The service users stated how difficult it was to find an NHS dentist. Even when they were successful in finding one, the dental practice was either too far away, which caused transportation issues, or they would end up being removed from the practice due to missed appointments because of their unstable mental health condition. 
“You know, you have no choice, you know you have to often put your name down, where I live, its centralized system that you can put your name down and then you’ll be allocated a dentist, but it could be somebody on the other side of town to try to keep it local, ‘you know this one’s come up, would you like to register with them?’ and the cost because rather wait another three or four months you are going to say yeah. So then getting across there becomes a problem.” (Service user J-04, with diagnosis of bipolar disorder). “ So the barrier, is the support, if you are unwell how will you be able to get to the appointment? That’s where the barrier is, would there be enough support in order for me to get to the appointment or will I be able to ask questions during the appointment? And if so, will it be with my level of care be affected? (Service user S-01, with diagnosis of schizophrenia). The cost of dental treatment was also reported as a significant barrier for seeking dental treatment. “Because it’s having access to quality dental care and if it’s costing you 45 quid to go now and a bit of a squirt and clean 45 quid is, you know well that’s Monday, Tuesday, Wednesday, Thursday’s benefits for me well what shall we not pay? Shall we not pay my rent, shall we not pay my council tax; so I am not going see my kids, yeah; no, I am okay with brown teeth and a bit of plaque. You know you’re asking people to make those sort of choices.” (Service user J-04, with diagnosis of bipolar disorder). Lack of integration between health services regarding the provision of holistic care and considering the overall health and well-being of the patient was highlighted as the missing piece of the puzzle. The service users reported that this lack of integration meant that oral health was not considered a priority by mental health and other health professionals, without considering the negative impact of poor physical health on their mental health or vice versa. “Making every contact count, it does need to be a conversation and part of you know, a multi-disciplinary team approach, social workers, health workers, mental health workers, GPs. You know it’s a bit like the conversation around making sure people get their physical health checks as part of their severe mental illness and medics, I’ve heard them say it before you know ‘we’re not experts in physical health’, but you know what you, you are my consultant psychiatrist, you are my mental health nurse, you are my social worker, you are whoever, you don’t have to be an expert in the field to put in my CPA letter or my discharge letter or the letter to my GP-when was the last time I saw a dentist or when’s the last time I had a physical health check…you know, to advocate for me and that’s what we need, we need people to support us, we need people to advocate for us.” (Service user K-05, with diagnosis of schizophrenia). 3.3.2. Accessibility and Availability of Services The dental service providers were aware of the difficulty with finding a dentist but mentioned that due to the way the dental commissioning works and having heavy caseloads, they have to remove a patient from under their care if appointments are frequently missed. “Those patients that don’t attend appointments with us, you know they don’t add three hours of our time. 
So, we are commissioned to deliver those targets, so the practice just you know can’t keep seeing them, you know if they really struggle, unfortunately, to comply with the normal frame of practice in primary care.” (Service provider E-02, High street dentist). Getting the dental appointment easily and within a short time was suggested, though such facilities are scarce now. “The majority of people who come in, it isn’t that it was their focus or their priority, but if there were any issues there used to be a facility for a very quick referral to a local dentist and the whole system is not there anymore. But it was an NHS dentist and it was possible to bring them in the morning and have an appointment the same day. And that was focused mainly on people who have mental health problems and I think the benefits of that were people got seen straight away, they didn’t have to think about it and the pressing issues, whatever the tooth ache or whatever contentious was fixed straightaway.” (Service provider- S-08, worked as mental health nurse). It was also pointed out that with the existing demand for mental healthcare, existing services are at full capacity; therefore, facilitating the patients in other areas of their health might not always be feasible with limited resources. “With the caseloads that people carry at the moment you wouldn’t be able to, mental health staff wouldn’t be able to kind of facilitate supporting someone to get those. So even you had that overview and you have that, I don’t know that awareness you still have not got the resources in terms of staffing to be able to support that. And so you, you just continue hitting that barrier, because the people have just got ridiculous caseloads essentially.” (Service provider M-09, mental health nurse). However, in line with the service users’ views, the health professionals agreed that there was a lack of integration between services, with every service mostly dealing with one aspect of patients’ health and not working in coordination to improve the overall health and well-being of the patient. “So how do we have those conversations about finding a sweet spot for an individual- right balance so that each profession understands the rationale behind what the other one is doing and we’re not always just butting heads, but we’re actually supporting the patient in the middle.” (Service provider B-10, special care dentist).
This theme contained sub-themes that linked to the oral-health-related barriers and facilitators that the service users and the providers faced at the personal level. 3.1.1. Impact of Mental Ill-Health The service users talked about how their mental illness put them at a disadvantage in comparison to the general population. One example of this was a lack of motivation impacting the service users’ ability to maintain good oral hygiene. “I mean I can spend days when I can’t actually get out of bed never mind think about cleaning my teeth, you know that’s just not something that’s going to happen.” (Service user J-06 with diagnosis of bipolar disorder). “I think, when you have a severe mental illness, you can neglect yourself. And a part of that can be you neglect your oral health.” (Service user H-03 with diagnosis of schizophrenia). One of the service users also mentioned how her mental health had worsened during the COVID-19 restrictions due to a sense of isolation, which further fed into her apathy for her general well-being and oral health in particular. “I mean, at one point I was really good and I was doing everything you know brushing three times a day, using the little inter dental brushes, the mouthwash, I was at the top end of the regime and then my mental health got worse, I think during the second lockdown and that’s when I lost the momentum and I’m struggling to get that momentum back.” (Service user M-02 with diagnosis of schizophrenia). Participants also mentioned how their negative life experiences heavily influenced their intention to visit the dentist for regular check-ups or for treatment. These relate to the need to have a sense of trust and rapport with their dental health professional before they could feel comfortable about being under their care. “if somebody said that well Hayley ( pseudonym ) can’t look after you today, I drive away for a while, you know, a couple of weeks, if need be, I know she’s moved to another dentist I say where she’s moved to please? because you know, I trust that person, you know I would want to be on her caseload and because it’s an important thing to people who have endured poor mental health and serious mental illness that when they start to trust somebody it becomes a very particular relationship.” (Service user J-04 with diagnosis of bipolar disorder). Furthermore, the intrusive nature of dental treatments was also reported as a significant barrier due to associations with past experiences or negative emotions. “I think it’s really a common thing like a lot of people have had experiences that you know felt very intrusive and as an invasive and around the mouth, it makes sense to me that, like dentistry is really triggering for that and really replicate some of that feeling of powerlessness feeling of being out of control, it being painful like having to have your mouth open and you’re not in control of that.” (Service user Sa-07 with diagnosis of bipolar disorder) In addition, the participants also spoke about the range of emotions that they go through whenever they have to visit a dentist such as shame, fear, anxiety and distress related to dental treatments. It was highlighted that sitting in the waiting room and having to listen to the drilling sounds can provoke anxiety. 
It was apparent from the interviews that oral health was considered an integral part of general health and well-being and the major barrier that they faced at the service provider’s personal level was lack of understanding about their mental illness and lack of consideration on how to manage the individual patient according to their needs. “This level of education is really needed with these groups of individuals around trauma and you know, so that they are psychologically informed and trauma informed. You know who wants to put anybody through any kind of distress, but you know so it’s a group of people that really do need to learn more about their patients.” (Service user K-05 with diagnosis of schizophrenia and autism). 3.1.2. Having a Positive Attitude The views of the service users regarding the need for sensitivity and tact while dealing with patients with mental illness were reflected by the service providers as well. They agreed that there was a need to move away from a position of judgment, and to use compassion and empathy when dealing with patients. “I think it’s just important not to judge and actually what you think may be normal for a group of patients isn’t and if some of my patients brush maybe once or twice a week, then that’s better than never and that’s actually all I can expect from them. So, I think it’s about being realistic and non-judgmental and starting with basic things…” (Service provider C-01 working as a community dentist). The need for the development of effective communication skills in order to be able to effectively communicate with patients who require extra support was also highlighted by the health professionals as an important area that required improvement. “So just as much as tooth brushing is a habit, it’s a healthy habit and it needs to be encouraged so again it just comes back to the way in which that conversation happens. It’s not the ‘you need to do it like this’, we need ‘we’re here to educate you and tell you what to do’, it’s more ‘do you understand the benefits of what I am teaching you and can you demonstrate it to me so that I know that you’re able to do it well yourself’ and that’s the approach that I think could go somewhere.” (Service provider B-10, special care dentist). 3.1.3. Keeping Oral Health on the Agenda The need for effective communication skills for patient management, highlighted by the health professionals, was further explored to understand how this could be incorporated to address the oral health needs of the service users with SMI. Taking a more holistic approach by considering not just patient’s teeth but the whole person, proportional to their individual needs, was suggested as the way forward. “I think that sometimes people may misunderstand that oral health just means mouth and teeth but actually it’s about the whole of the person, including medical but also including and I suppose it’s sort of taking a rounded approach to the person and sort of a holistic approach for that person.” (Service provider C-01, community dentist). “I think education is quite the key and also trying to break down those barriers and say you know we are kind of patient people, we do understand your problems and anxieties and try to find ways of managing that and dealing with that and showing them that it’s not as bad as what they think is.” (Service provider- H-04, special care dentist). 
Training dental professionals in mental health was another area highlighted as needing attention to address the barriers to providing the patients with best possible care. “Yes, indeed clinicians don’t tend to raise things if they’re a bit anxious about whether they’re able to deal with what comes up. So, I think there is a need for some mental health training for the dentist. May be even mental health first aid course that can be two days. Not expecting the dentists to train as mental health professionals, that’s a little bit training we have.” (Service provider- D-07, caring for a person with schizophrenia). On the other hand, it was also discussed that mental health staff could be trained to look out for their patient’s oral health by flagging up any signs of a problem and referring the patient to receive appropriate care. “I think there’s a real awareness now that physical and mental health go hand in hand, and we need to have an angle on both and doesn’t mean you have to be an expert in dentistry in dental hygiene, but just having a general awareness of kind of I don’t know what warning signs or things to look out for. Just making sure, a lot of it might just be making sure people have the regular checks and understanding the importance of that.” (Service provider S-06, occupational therapist).
The service users talked about how their mental illness put them at a disadvantage in comparison to the general population. One example of this was a lack of motivation impacting the service users’ ability to maintain good oral hygiene. “I mean I can spend days when I can’t actually get out of bed never mind think about cleaning my teeth, you know that’s just not something that’s going to happen.” (Service user J-06 with diagnosis of bipolar disorder). “I think, when you have a severe mental illness, you can neglect yourself. And a part of that can be you neglect your oral health.” (Service user H-03 with diagnosis of schizophrenia). One of the service users also mentioned how her mental health had worsened during the COVID-19 restrictions due to a sense of isolation, which further fed into her apathy for her general well-being and oral health in particular. “I mean, at one point I was really good and I was doing everything you know brushing three times a day, using the little inter dental brushes, the mouthwash, I was at the top end of the regime and then my mental health got worse, I think during the second lockdown and that’s when I lost the momentum and I’m struggling to get that momentum back.” (Service user M-02 with diagnosis of schizophrenia). Participants also mentioned how their negative life experiences heavily influenced their intention to visit the dentist for regular check-ups or for treatment. These relate to the need to have a sense of trust and rapport with their dental health professional before they could feel comfortable about being under their care. “if somebody said that well Hayley ( pseudonym ) can’t look after you today, I drive away for a while, you know, a couple of weeks, if need be, I know she’s moved to another dentist I say where she’s moved to please? because you know, I trust that person, you know I would want to be on her caseload and because it’s an important thing to people who have endured poor mental health and serious mental illness that when they start to trust somebody it becomes a very particular relationship.” (Service user J-04 with diagnosis of bipolar disorder). Furthermore, the intrusive nature of dental treatments was also reported as a significant barrier due to associations with past experiences or negative emotions. “I think it’s really a common thing like a lot of people have had experiences that you know felt very intrusive and as an invasive and around the mouth, it makes sense to me that, like dentistry is really triggering for that and really replicate some of that feeling of powerlessness feeling of being out of control, it being painful like having to have your mouth open and you’re not in control of that.” (Service user Sa-07 with diagnosis of bipolar disorder) In addition, the participants also spoke about the range of emotions that they go through whenever they have to visit a dentist such as shame, fear, anxiety and distress related to dental treatments. It was highlighted that sitting in the waiting room and having to listen to the drilling sounds can provoke anxiety. It was apparent from the interviews that oral health was considered an integral part of general health and well-being and the major barrier that they faced at the service provider’s personal level was lack of understanding about their mental illness and lack of consideration on how to manage the individual patient according to their needs. 
“This level of education is really needed with these groups of individuals around trauma and you know, so that they are psychologically informed and trauma informed. You know who wants to put anybody through any kind of distress, but you know so it’s a group of people that really do need to learn more about their patients.” (Service user K-05 with diagnosis of schizophrenia and autism).
The views of the service users regarding the need for sensitivity and tact while dealing with patients with mental illness were reflected by the service providers as well. They agreed that there was a need to move away from a position of judgment, and to use compassion and empathy when dealing with patients. “I think it’s just important not to judge and actually what you think may be normal for a group of patients isn’t and if some of my patients brush maybe once or twice a week, then that’s better than never and that’s actually all I can expect from them. So, I think it’s about being realistic and non-judgmental and starting with basic things…” (Service provider C-01 working as a community dentist). The need for the development of effective communication skills in order to be able to effectively communicate with patients who require extra support was also highlighted by the health professionals as an important area that required improvement. “So just as much as tooth brushing is a habit, it’s a healthy habit and it needs to be encouraged so again it just comes back to the way in which that conversation happens. It’s not the ‘you need to do it like this’, we need ‘we’re here to educate you and tell you what to do’, it’s more ‘do you understand the benefits of what I am teaching you and can you demonstrate it to me so that I know that you’re able to do it well yourself’ and that’s the approach that I think could go somewhere.” (Service provider B-10, special care dentist).
The need for effective communication skills for patient management, highlighted by the health professionals, was further explored to understand how this could be incorporated to address the oral health needs of the service users with SMI. Taking a more holistic approach by considering not just patient’s teeth but the whole person, proportional to their individual needs, was suggested as the way forward. “I think that sometimes people may misunderstand that oral health just means mouth and teeth but actually it’s about the whole of the person, including medical but also including and I suppose it’s sort of taking a rounded approach to the person and sort of a holistic approach for that person.” (Service provider C-01, community dentist). “I think education is quite the key and also trying to break down those barriers and say you know we are kind of patient people, we do understand your problems and anxieties and try to find ways of managing that and dealing with that and showing them that it’s not as bad as what they think is.” (Service provider- H-04, special care dentist). Training dental professionals in mental health was another area highlighted as needing attention to address the barriers to providing the patients with best possible care. “Yes, indeed clinicians don’t tend to raise things if they’re a bit anxious about whether they’re able to deal with what comes up. So, I think there is a need for some mental health training for the dentist. May be even mental health first aid course that can be two days. Not expecting the dentists to train as mental health professionals, that’s a little bit training we have.” (Service provider- D-07, caring for a person with schizophrenia). On the other hand, it was also discussed that mental health staff could be trained to look out for their patient’s oral health by flagging up any signs of a problem and referring the patient to receive appropriate care. “I think there’s a real awareness now that physical and mental health go hand in hand, and we need to have an angle on both and doesn’t mean you have to be an expert in dentistry in dental hygiene, but just having a general awareness of kind of I don’t know what warning signs or things to look out for. Just making sure, a lot of it might just be making sure people have the regular checks and understanding the importance of that.” (Service provider S-06, occupational therapist).
Use of a tailored approach was identified as a facilitator at the inter-personal level by both the service users and the service providers. 3.2.1. Need to Be Heard and Understood The service users felt that they faced discrimination or experienced patronising attitudes because of their impact of mental ill health’, which created barriers for them in accessing dental services. “Like the stigma and discrimination around mental health in society generally I think comes into it. People feel anxious that they’re going to be judged and misunderstood and I think that you know, makes it difficult for people, especially like in the acute phase of their illness to sort of make contact with other health providers.” (Service user Sa-07 with diagnosis of schizophrenia). The mental health service users expressed their desire to be involved in their treatment planning, to be treated as a whole individual and be given a voice. “Mental health, I would say it’s already exploited you know in terms of not giving patients a voice and disabilities can be very life limiting. So, giving people the options and scope around that gives them a strong voice and a recognition that they are involved in their own treatment in healthcare.” (Service user S-01 with diagnosis of schizophrenia). 3.2.2. Considering the Individual Needs The service providers, similarly, spoke about how important it is to provide adequate support to people with SMI and the importance of framing the narrative in a way that would involve them more in their own care and decision making. “Patient should be provided information about the potential side effects of their medication that they are prescribed, they should have a fully informed choice. So again, that will come under the mental health side of things. I think, historically, some mental health services avoided telling people about all the potential effects because the medications were pretty problematic. Hopefully now that’s pretty much legal and patients in the hereafter provide fully informed consent but I am not sure how thoroughly people still do that” (Service provider C-05, clinical psychologist). However, one of the biggest barriers from the health service provider’s point of view was the lack of motivation and lack of compliance on the part of the patient, because even when there was perceived to be adequate support available for a patient, their non-concordance would potentially prevent the service user from benefitting from the dental services. “The main thing should just be getting people through the door and to have an examination or to have education about the kind of oral hygiene, that is where there will be the most benefit.” (Service provider C-05, clinical psychologist). For this reason, involvement of the carers such as the family or friends was brought up as an important element of managing patients with SMI, not only to motivate them but also to liaise with the health professionals on their behalf. “I have had people who don’t want to discuss their trauma or their past and have consented to the person who is supporting them to discuss it and so sometimes having that other person there, it gives them a different way of communicating and if they don’t want to speak about it directly but have allowed their carer or support worker to do on their behalf, that’s also happened sometimes.” (Service provider C-01, community dentist).
At the systems level, it was clear from both the service users’ and service providers’ narrative that more comprehensive support is needed to help people with severe mental illness to overcome barriers for their oral health care. 3.3.1. Utilisation of Dental Services At the service level, there were two main barriers that were identified by the service users: (1) accessibility issues and (2) lack of availability of integrated care. The service users stated how difficult it was to find an NHS dentist. Even when they were successful in finding one, the dental practice was either too far away, which caused transportation issues, or they would end up being removed from the practice due to missed appointments because of their unstable mental health condition. “You know, you have no choice, you know you have to often put your name down, where I live, its centralized system that you can put your name down and then you’ll be allocated a dentist, but it could be somebody on the other side of town to try to keep it local, ‘you know this one’s come up, would you like to register with them?’ and the cost because rather wait another three or four months you are going to say yeah. So then getting across there becomes a problem.” (Service user J-04, with diagnosis of bipolar disorder). “ So the barrier, is the support, if you are unwell how will you be able to get to the appointment? That’s where the barrier is, would there be enough support in order for me to get to the appointment or will I be able to ask questions during the appointment? And if so, will it be with my level of care be affected? (Service user S-01, with diagnosis of schizophrenia). The cost of dental treatment was also reported as a significant barrier for seeking dental treatment. “Because it’s having access to quality dental care and if it’s costing you 45 quid to go now and a bit of a squirt and clean 45 quid is, you know well that’s Monday, Tuesday, Wednesday, Thursday’s benefits for me well what shall we not pay? Shall we not pay my rent, shall we not pay my council tax; so I am not going see my kids, yeah; no, I am okay with brown teeth and a bit of plaque. You know you’re asking people to make those sort of choices.” (Service user J-04, with diagnosis of bipolar disorder). Lack of integration between health services regarding the provision of holistic care and considering the overall health and well-being of the patient was highlighted as the missing piece of the puzzle. The service users reported that this lack of integration meant that oral health was not considered a priority by mental health and other health professionals, without considering the negative impact of poor physical health on their mental health or vice versa. “Making every contact count, it does need to be a conversation and part of you know, a multi-disciplinary team approach, social workers, health workers, mental health workers, GPs. 
You know it’s a bit like the conversation around making sure people get their physical health checks as part of their severe mental illness and medics, I’ve heard them say it before you know ‘we’re not experts in physical health’, but you know what you, you are my consultant psychiatrist, you are my mental health nurse, you are my social worker, you are whoever, you don’t have to be an expert in the field to put in my CPA letter or my discharge letter or the letter to my GP-when was the last time I saw a dentist or when’s the last time I had a physical health check…you know, to advocate for me and that’s what we need, we need people to support us, we need people to advocate for us.” (Service user K-05, with diagnosis of schizophrenia). 3.3.2. Accessibility and Availability of Services The dental service providers were aware of the difficulty with finding a dentist but mentioned that due to the way the dental commissioning works and having heavy caseloads, they have to remove a patient from under their care if appointments are frequently missed. “Those patients that don’t attend appointments with us, you know they don’t add three hours of our time. So, we are commissioned to deliver those targets, so the practice just you know can’t keep seeing them, you know if they really struggle, unfortunately, to comply with the normal frame of practice in primary care.” (Service provider E-02, High street dentist). Getting the dental appointment easily and within a short time was suggested, though such facilities are scarce now. “The majority of people who come in, it isn’t that it was their focus or their priority, but if there were any issues there used to be a facility for a very quick referral to a local dentist and the whole system is not there anymore. But it was an NHS dentist and it was possible to bring them in the morning and have an appointment the same day. And that was focused mainly on people who have mental health problems and I think the benefits of that were people got seen straight away, they didn’t have to think about it and the pressing issues, whatever the tooth ache or whatever contentious was fixed straightaway.” (Service provider- S-08, worked as mental health nurse). It was also pointed out that with the existing demand for mental healthcare, existing services are at full capacity; therefore, facilitating the patients in other areas of their health might not always be feasible with limited resources. “With the caseloads that people carry at the moment you wouldn’t be able to, mental health staff wouldn’t be able to kind of facilitate supporting someone to get those. So even you had that overview and you have that, I don’t know that awareness you still have not got the resources in terms of staffing to be able to support that. And so you, you just continue hitting that barrier, because the people have just got ridiculous caseloads essentially.” (Service provider M-09, mental health nurse). However, in line with the service users’ views, the health professionals agreed that there was a lack of integration between services, with every service mostly dealing with one aspect of patients’ health and not working in coordination to improve the overall health and well-being of the patient. 
“So how do we have those conversations about finding a sweet spot for an individual- right balance so that each profession understands the rationale behind what the other one is doing and we’re not always just butting heads, but we’re actually supporting the patient in the middle.” (Service provider B-10, special care dentist).
The study aimed to explore the barriers and facilitators for the maintenance of oral health and dental service use by people with SMI, informal carers and service providers. Themes were identified and classed at three levels—personal, inter-personal and system levels—to provide an understanding of how the barriers and facilitators can be addressed to provide the best possible care and support through the development and testing of a comprehensive intervention. When the study findings and recommendations were discussed with the stakeholders, they agreed with them. A previous qualitative study reported experience of oral health and perceived support by classifying the findings according to five categories: the shame of having poor dental health, history of dental care, experiences of self-care, handling of oral health problems, and experiences of staff support. Similarly, in the current study, we found that the shame of having poor dental health induced feelings of guilt, and a sense of stigma was considered a major personal-level barrier. At the interpersonal level, service users stated that lack of sensitivity or a suitable approach by dental service providers while communicating with patients with SMI was a significant barrier for them to access dental services, for example, the ‘patronising attitude’ they experienced when accessing services. The service users also highlighted the need for more psychologically and trauma-informed dentists, and more patient involvement in the provision of care. These feelings were also reflected in the accounts of those service users who were very happy with their dental care provider. This was because of the sense of trust and rapport that they shared with them, and the service users expressed fears of losing them due to job changes or re-location. This is in line with the findings reported by Bjørkvik et al. (2021) in their study conducted in Norway to explore perceived barriers for obtaining optimal dental care for patients with SMI. The authors report that patronising attitudes cannot lay the foundation for a respectful relationship between a service user and service provider(s) and that patients should be allowed input in planning their own care provision. In the current study, we also explored the views of different health service providers in relation to barriers and facilitators for oral health maintenance and dental service use by people with SMI. Limitations around access to a dentist, including discontinuity of care due to missed appointments and high caseloads, were identified as the main system-level barriers. These findings are similar to those presented by a study conducted in Australia exploring the views of mental health nurses about dental access by patients with SMI. The authors reported that the main barriers related to limited access to dental health care services, the cost of dental treatment and long waiting times. A review also reported that the cost of care and dental phobia are the most frequently reported barriers to dental care in psychiatric patients. Other barriers included lack of awareness of dental health and mistrust towards dental health providers. The identified facilitators were related to dental professionals’ effective communication skills, the provision of tailored support, the involvement of carers and the need for an integrated care model with interprofessional communication to support the patient’s overall health and well-being and not just one aspect of their health.
Similar results were reported by a study conducted in the USA regarding barriers and facilitators for oral health among persons living with mental illness, in which the authors qualitatively explored the views of patients with mental illness, psychiatrists and dentists. The study reported dentists’ chairside manner, community support and interprofessional communication as important professional- and system-level facilitators for supporting the dental needs of patients with SMI. The present study collected data from both service users and service providers. We aimed to recruit service users representative of both male and female genders and covering a range of diagnoses that constitute SMI. Although the number of service users interviewed is relatively small, they came from a wide geographical area, which indicates that the challenges they face are common across the health system. Among the service providers, we included both dental and mental health professionals, as well as an informal carer. The service providers were also from different geographical areas, and the incorporation of the perspectives of both groups of service providers is a strength of the study. Apart from it being a necessity to conduct interviews remotely at that time, use of videoconferencing to conduct the interviews provided several other advantages. These included the ability to reach participants situated in diverse locations, flexibility in arranging interviews at a convenient time, avoidance of time spent travelling to the venue and associated expenses, and the ability to interview participants in a familiar surrounding whilst preserving the face-to-face aspect of in-person interviews to allow for observation of non-verbal and visual cues. To ensure quality, efforts were put in place to maintain rigour at all times in conducting this qualitative study. We also sought some validation of our findings via the stakeholder consultations. Having that wider stakeholder engagement in this under-researched area was an important addition to the validity of our findings. There are some limitations of the study. One main limitation was the small sample size. Within the remit of the study, we were able to recruit a limited number of participants (seven service users, nine health professionals and one informal carer). We were able to recruit only one informal carer who showed an interest in taking part during the study recruitment phase. The recruitment of both informal and paid carers could have captured their perspective more comprehensively. There was also a lack of recruitment from diverse ethnic backgrounds. Recruiting more widely would have increased the inclusion of cultural and ethnic diversity among participants and of different viewpoints, thereby further enriching the data. We did not collect data about participants’ socio-economic conditions. Allowing for more recruitment time, contacting related gatekeepers and collecting socio-demographic information of the participants should be considered for a similar future study. Nonetheless, this study is an important addition to the knowledge base, as it highlighted an important and under-researched topic. Within the constraints of time and resources, our good quality but modestly sized study brings to light some very important issues.
Further research is needed with a larger and more diverse sample to explore views on specific aspects of interventions to improve oral health in this population; this is important for developing or adapting effective interventions and programmes to improve the oral health and daily self-care of people living with SMI.
In this qualitative study, the main barriers identified were the impact of mental ill-health, the lack of patient involvement and a tailored approach, and issues around accessibility and availability of dental services, including the lack of integration of services. The main facilitators identified were service providers’ effective communication skills and further support through the involvement of carers. Our findings suggest the need for a comprehensive approach to better support people with SMI in their oral health care needs. Integration of services and provision of tailored support, with a focus on the overall health and general well-being of the patient, have been highlighted as the most important next step by both the service users and the service providers.
Equalizing the Playing Field and Improving School Food Literacy Programs Through the Eyes of Teens: A Grounded Theory Analysis Using a Gender and Sport Participation Lens | 923543c0-6642-4bd9-8bba-f76537d863d0 | 11858305 | Health Literacy[mh] | School food literacy programs, such as home economics classes, represent a key opportunity to improve teens (13–18 years) eating behaviours . Food literacy encompasses a range of skills or competencies such as knowledge on how to cook, what is considered healthy, and where food comes from . Knowledge of the role of food in culture and global sustainability, as well as the interconnections between economic and physical food systems, are also important food literacy competencies . A review of 11 countries that examined food education policies revealed that across countries there is no standardized approach or consensus in food literacy programs in terms of content or how they are delivered . The other global literature has supported the notion that food literacy programs in schools do not have a set curriculum or standard implementation strategy . Despite this, it is well supported that food literacy programs in schools are an important intervention strategy to improve the health of youth globally . In a systematic review of 44 studies of school-based food literacy programs from 16 countries, the authors found that participation in a school food literacy program had benefits on youths’ (10–19 years) dietary habits (e.g., improved consumption of fruits, vegetables and dairy) and knowledge of food-related skills (e.g., cooking safety and nutrition knowledge) . However, little research has sought to understand who specifically benefits from school food literacy programs and why. As girls have consistently been found to possess greater food literacy competencies compared with boys in several countries, including Australia, Iran, South Africa and the United States , and girls have been found to have greater opportunities to learn skills in countries such as Canada and China , boys may be at a disadvantage in current school food literacy systems. Historically, gender norms postulate that mothers are the primary parent in charge of food-related tasks . This may contribute to greater pressure and opportunities for girls to learn food literacy compared with boys . In a study exploring the food-related attitudes of 836 teens (11–18 years), cooking skills were viewed as more relevant for girls compared with boys . Over a decade later, these findings are still evident; in a qualitative study that explored changes in dietary behaviours, the authors reported that parents expected and encouraged their daughters, but not their sons, to learn food-related skills, such as how to cook . To this end, gender norms exist when discussing household division of labour, including tasks such as cooking, and they may impart a view among teens that only certain teens need to learn food literacy skills . Additionally, parents may promote these norms unintentionally by providing differing opportunities for their sons and daughters to learn food literacy skills at home . This inequality in the pressures and opportunities boys and girls have to learn food literacy skills may simultaneously disadvantage both groups in different ways. For example, girls may be exposed to more traditional views of ‘food-related’ work as they grow up, unknowingly creating greater opportunities for them to participate in food-related activities at home or schools . 
This may seem positive as greater food literacy has been correlated with beneficial dietary habits, but gender-based pressures may also create greater stress among teen girls to master food literacy in comparison with their peers who are boys. In contrast, boys may not be provided with as many opportunities to learn about food literacy or they may avoid participation altogether if they feel that food literacy skills are ‘feminine’, potentially creating a negative impact on dietary habits and long-term physical health. The literature has also suggested that sport participation may impact food literacy. In certain situations, young athletes may receive tailored information from a sports dietitian, an expert in nutrition, about how to support their physical performance through food. This information often focuses on sport nutrition (e.g., knowledge of elevated nutrient needs and timing meals) to optimize sports performance, a subset of food literacy. In the literature, few studies have evaluated athletes’ food literacy. Instead, most of the literature focuses on a specific subset of sport nutrition, such as safe supplement use, or is conducted in elite sport settings. In studies assessing teen athletes’ knowledge of sport supplementation, athlete boys express greater knowledge of protein supplementation compared with girls and are better able to outline protein’s role in performance. In contrast, athlete girls have been found to report greater knowledge of vitamins and minerals for general health. These trends may be influenced by historical gender norms, as dietary protein has long been tied to masculinity, whereas micronutrients have long been associated with femininity. As such, athlete boys and girls may pursue different types of sport nutrition information or experience different opportunities to hear about the impact of certain nutrients based on these historical connotations. This may have unfavourable consequences on an athlete’s health and performance that vary by gender; athlete boys may not pursue nutrition information about general health whereas athlete girls may not pursue information more specific to the performance context and muscle building. However, not all of the literature has found that gender plays a significant role in athletes’ sport nutrition knowledge. In a study from the United States that evaluated 535 high school athletes (14–18 years), the authors found that knowledge about sports nutrition in general (e.g., “Importance of diet”) did not differ between boys and girls. As food literacy is a complex topic that includes various aspects such as cooking skills and nutrition knowledge among other factors, there is a need to clarify how sport involvement and gender impact school food literacy program experiences. Understanding how gender and sport involvement impact teens’ food literacy experiences is important as it has the potential to inform the refinement of program design. For example, understanding why an athlete or a non-athlete may choose to participate in a food literacy program, and what they take away from the program, can point to gaps in current program delivery or content that can be amended to better address teens’ motivation to participate. At present, to our knowledge, nothing in the literature has sought to untangle these differences. As motivation plays a significant role in the likelihood of behaviour change, this is an important gap to fill.
To provide guidance that food literacy program developers can use to refine current programs, we generated a theoretical understanding of how to improve teens’ experiences and engagement in school food literacy programs. As food literacy programs currently have no standardized content or delivery, this work has the potential to improve the appeal of food literacy programs where athletes and non-athletes may both be present. This study is part of a broader project called the EATing in a Gendered world study (EatGen). The EatGen study is a mixed-methods project that aims to co-design a school-based intervention to improve high school athletes’ eating habits. This specific study is a part of the foundational ‘basic behavioural science’ phase that seeks to understand the key issues and differences in athletes’ and non-athletes’ eating habits and experiences surrounding food. The full interview guide used in the broader study has been published previously. In brief, the guide was developed by the first and senior authors using the Socio-Ecological Model, the Food Literacy Competencies for Young Adults Framework and previous research. The guide was pilot tested with one teen athlete prior to its use and minor changes to wording were made to improve clarity. Data were collected until saturation. A completed checklist for the consolidated criteria for reporting qualitative research (COREQ) is provided. 2.1. Design We recruited teens who attended a local secondary school (British Columbia, Canada) using a scannable QR code that was placed on posters around the school or shown following classroom presentations. Three female research assistants conducted the in-person presentations. Recruitment occurred at two time points; a flow diagram detailing this is provided. Interested teens left their contact information through the QR codes and received access to an electronic copy of the study’s consent or assent form that explained the study’s purpose and procedures, as well as the researchers’ reasons for conducting the study. All contacted teens who met the eligibility criteria were provided with a letter for their parents. To be scheduled for a one-on-one interview, a teen had to complete a brief online survey asking about their demographics (e.g., age, grade, sex, gender and ethnicity) and sport history, collected on Qualtrics (Qualtrics XM, Provo, UT, USA, 2020). Once completed, an online interview was scheduled while the teen was in a private location via Zoom (Zoom Video Communications Inc., San Jose, CA, USA, 2016). Interviews lasted 19–57 min and were conducted by a female research assistant (one of the three who led recruitment) who had been trained by the study’s first author. The audio from the interviews was stored on a secure server with case notes. All audio was sent to a transcription service that uses humans to transcribe the data verbatim into de-identified Word documents (Transcription Heroes, Inc., Toronto, ON, Canada, 2023). 2.2. Setting and Participants To be eligible, a teen had to have no diagnosed eating disorder or severe dietary restriction (e.g., Crohn’s disease), have access to Zoom and be a Canadian citizen. Teens could be in any grade (grades 8 to 12) and were recruited to balance sport participation and biological sex.
Teens who had participated in a competitive high school sport and/or a club sport in the previous year were considered ‘athletes.’ These sports take place outside of school hours several nights a week and include competition against other local schools or clubs. The content and design of food literacy programs at the chosen school are regulated at the school district level and can be found online (https://curriculum.gov.bc.ca/curriculum/continuous-views, accessed on 10 December 2024). The curriculum focuses on helping teens understand the role of food advertisements in eating, how to cook meals that do not have a high degree of difficulty and the relationships between food and health. How or what content is used to achieve this is not mandated by the school or the province and instead left up to the individual teacher. 2.3. Analysis Informed by Grounded Theory, we explored how gender and sport influenced teens’ food literacy experiences to develop a theoretical understanding of how to improve school programs. As best-practice methods in dietary intervention development suggest the use of a behavioural theory and framework to guide design-related decisions, a Grounded Theory analysis was considered appropriate to begin developing such a theoretical framework in the context of food literacy programs. The purpose of a Grounded Theory analysis is to construct a theory from data, making it well suited to this aim. All transcripts were independently coded by the first author and either the second or third author using NVivo 12 (QSR International Pty Ltd., Burlington, MA, USA, 2025). The three researchers analyzing the data were females with various backgrounds in sport. Triangulation between the three was used to resolve any discrepancies. We developed an inductive coding scheme (i.e., codes were generated from the data itself) through the process of line-by-line coding through multiple passes of each transcript. We then conducted a process of focused coding, where similar codes or codes exploring related concepts were grouped together to form categories. Categories were defined and their relationships to one another were explored to ensure all the ideas that teens expressed were encompassed without overlap between categories. Throughout this process, memo-writing was used to capture notes on developing codes and interconnections between categories. Categories from transcripts were then consolidated into higher-level categories to inform a theoretical understanding of how to improve school programs. The naming of high-level categories was inductively derived using teens’ wording itself. We then compared higher-level categories to constructs from established behavioural theories, such as the Capabilities, Opportunities, Motivation and Behaviours (COM-B) Model, Self-Determination Theory (SDT) and Social Cognitive Theory (SCT), which are commonly used in dietary interventions. We elected to explore this mapping as food literacy programs are ultimately a dietary-based intervention and, as such, would benefit from using guiding theory in their design. By mapping inductive categories against known theories, we were able to see if there was better alignment towards using one over another, or if something completely novel was needed to guide the refinement of food literacy programs. As data were collected at two time points, we used concurrent data collection strategies and stopped data collection once theory saturation had been achieved (i.e., no new categories emerged).
Categories were also explored based on sport involvement and gender.
We conducted interviews with 33 teens. The majority of the teens were white and in grade 10. Just over half were considered athletes (55%), and of these most athletes participated in basketball and volleyball. All teens had participated in at least one school food literacy program previously. Four categories captured the teens’ views on how food literacy programs could be improved, as follows: Provide a challenge, Establish importance, Make it fun, and Practice is key. Though these categories captured all the teens’ ideas, gender and sport participation impacted how and why different teens prioritized different categories. Each category is presented below and discussed in terms of differences based on sport involvement and gender. 3.1. Provide a Challenge Teens expressed that food literacy programs in schools should be challenging to them. This could be achieved by providing greater depth in the content shared, such as how nutrients impact the body, or by learning how to cook more complex foods. “I want to learn a little bit more in depth about food and… what you should have in your diet or what you need every single day… breaking down what’s inside of it.” Participant 28, girl, non-athlete “I will probably just more so want to learn how to cook, like more complex meals.” Participant 27, girl, non-athlete In many cases, the idea of an impending challenge built up excitement or anticipation for what was to come in food literacy classes. “[I’d like to learn] how to do fish and like seafood in general.
In my foods class towards the end of the year we were supposed to do sushi and some of the seafood stuff. However, we ran out of time in the class, and they weren’t able to cover it [and I had wanted to].” Participant 14, boy, non-athlete Contrasting with this, some teens outlined how food literacy competencies can be hard to understand and apply (e.g., label reading) and programs needed to be simpler. Despite the challenge of learning food literacy competencies, all teens recognized that the programs were useful as many did not have the space or confidence to practice on their own. “I would just say um, making it more, like easier to understand [would improve my experiences].” Participant 4, boy, athlete “Do I ever make food? Only in foods class. But, not at home… I’m not too good at cooking and neither are my friends… we don’t generally cook stuff.” Participant 7, boy, athlete Provide a challenge was mentioned most often by non-binary teens and girls, compared with only half of boys, as a useful way to improve food literacy programs. It was valued similarly between athletes and non-athletes. 3.2. Establishing Importance Outlining how food impacts health and well-being was critical to promote buy-in for participating in school programs. In many cases, teens specifically talked about how this knowledge was increasingly important as they were going to be on their own at university. “You need to know how to cook some very basic things… especially if you’re going to university… So, if you want to stay actually healthy, and not die of cancer at 50, then you got to know what to make for yourself.” Participant 3, boy, athlete Teen girls highlighted a need to emphasize the importance of food beyond controlling body shape, whereas all athletes (regardless of gender) focused on the need to learn more about how food impacts sport performance. Almost all girls and all non-binary teens suggested Establish importance as their number one suggestion to improve food literacy programs. Among boys, a little over half talked about this category as a key suggestion. Most athletes also suggested this category as important. “I think I would rather… just kind of learn more about how, like how different foods truly affect your body and how they can influence your overall being rather than just how you look.” Participant 27, girl, non-athlete “[like to learn] Why healthy foods give you more energy than not healthy food. Like, because, like I feel like it doesn’t make sense because I feel like sugar would give you energy.” Participant 22, girl, athlete Suggestions to emphasize the importance of food literacy included detailing how food literacy is a life skill and utilizing sources of information from national guidelines that are considered trustworthy to teens. Teens considered trusted sources to be individuals possessing relevant credentials (e.g., trained teachers, nutritionists, government sources, etc.), regulatory bodies (e.g., school boards, national nutrition guidelines such as Health Canada’s Canada Food Guide), or other sources perceived as being unbiased. “Because you’ll need how to cook for the rest of your life… maybe like bringing in like a nutrition specialist and just talk about it.” Participant 1, boy, athlete “It’s mostly all government recommendations, like the different types of food and stuff. I trust it, it makes sense to me. We have watched some documentaries though that I think are very opinionated—like vegan documentaries that I think show clips out of context.
So, I don’t trust those.” Participant 9, boy, athlete Information learned about eating to improve and manage health was often taken to heart and enacted by teens outside of the classroom. “We were learning about content, like packaging… And so now I kinda am more cautious about what I eat. Like if I’m getting a bag of chips I’ll look at the amount of salt… if it’s more than 15% it’s probably not very good for you.” Participant 24, girl, athlete 3.3. Make It Fun In addition to the importance of food for health and well-being, teens suggested that food literacy programs in schools should be fun to help motivate teens to participate. Make it fun was talked about as the most important suggestion to improve food literacy programs among almost all boys and non-athletes. Make it fun was also highlighted among all non-binary teens as a priority, tied with Establish importance as the most important. “I really enjoy cooking, even before I took that class, so I knew that I would enjoy it. However, there were a lot of people in the class that were just taking it because it’s an easy A. I think really one element that could be emphasized a bit more is the fun that you could have with cooking.” Participant 14, boy, non-athlete Specific suggestions on how to incorporate more ‘fun’ into school programs included considering what teens wanted to learn, using clear dialogue, and supportive teachers. “I would probably make the foods that you want to make, because they sort of just like give you food you have to make, instead of like your own personal choice.” Participant 12, boy, non-athlete “I think simplified more. Because there was a lot of stuff that I didn’t understand, there was just a lot of big words, and it was kind of boring.” Participant 24, girl, athlete “It’s less focused on the course and more focused on the teacher. She was very negative about things… anytime people would make a mistake, she would get very aggressive about it, which I think that kind of discourages people from wanting to take foods classes.” Participant 16, non-binary, non-athlete 3.4. Practice Is Key A major aspect that contributed to enjoyment in school food literacy programs was the fact that they were hands-on and provided space to practice skills such as cooking. “I feel like everyone should have the hands-on experience… while creating the things that are the best for your body.” Participant 20, girl, athlete This was especially critical in improving teens’ confidence and knowledge to perform cooking tasks on their own without guardian supervision. “Making food can be pretty daunting. I used to not like it because the few experiences I’ve had before foods class it took super long and it didn’t turn out great and this class showed me that it’s a lot easier.” Participant 9, boy, athlete “I don’t cook very much at all. I cook when there’s nothing in the fridge or there’s no food at all. And usually, it’s just for me or me and my little sister.” Participant 30, girl, non-athlete An emphasis on practice was especially prominent among athletes who spoke about needing to understand how to perform food-related tasks on their own to manage their fuelling strategies. There was a stark difference between how non-athletes and athletes prioritized this suggestion. Practice is key was viewed by the majority of athletes as critical to school programs whereas it was not focused on by non-athletes.
“I’m kind of just happy that I know how to cook the basic stuff so that I can always cook myself a meal when I need to so I’m never hungry when I have to go to sports.” Participant 8, boy, athlete
“I feel like everyone should have the hands-on experience… while creating the things that are the best for your body.” Participant 20, girl, athlete This was especially critical in improving teens’ confidence and knowledge to perform cooking tasks on their own without guardian supervision. “Making food can be pretty daunting. I used to not like it because the few experiences I’ve had before foods class it took super long and it didn’t turn out great and this class showed me that it’s a lot easier.” Participant 9, boy, athlete “I don’t cook very much at all. I cook when there’s nothing in the fridge or there’s no food at all. And usually, it’s just for me or me and my little sister.” Participant 30, girl, non-athlete An emphasis on practice was especially prominent among athletes who spoke about needing to understand how to perform food-related tasks on their own to manage their fuelling strategies. There was a stark difference between how non-athletes and athletes prioritized this suggestion. Practice is key was viewed by the majority of athletes as critical to school programs whereas it was not focused on by non-athletes. “I’m kind of just happy that I know how to cook the basic stuff so that I can always cook myself a meal when I need to so I’m never hungry when I have to go to sports.” Participant 8, boy, athlete Teens’ participation in food literacy programs in schools marks a key opportunity to improve their dietary habits. To understand how to improve such programs, we explored 33 teens’ opinions on what makes school food literacy programs appealing to develop an underlying theory for program refinement. Four categories captured teens’ ideas, including Provide a challenge, Establish importance , Make it fun and Practice is key, making up the theoretical underpinnings. These underpinnings aligned closely with the principles of the Capability, Opportunity, Motivation and Behaviour (COM-B) Model , and subsequently our findings suggest that incorporating COM-B into the development of future food literacy programs may have a positive impact on teens’ dietary habits . Further, our findings add to the existing literature by examining how sport involvement and gender impact teens’ buy-in for school programs. Athletes emphasized the importance of Practice is key as food literacy competencies were viewed as a tool for sport performance. Boys were keener to suggest the importance of Make it fun whereas girls and non-binary teens wanted to see greater emphasis on Establishing importance . These differences fill a current gap in our understanding as to why diverse teens, including boys, girls, athletes and non-athletes, may participate in school food literacy programs. 4.1. COM-B Can Guide Refinement of Food Literacy Programs in Schools Given how close the four categories (i.e., Provide a challenge , Establish importance , Make it fun and Practice is key ) identified in this analysis are to the core concepts of the COM-B model, the COM-B model may be an advantageous model to use when designing or modifying school food literacy programs. To our knowledge, no other literature has sought to evaluate how school food literacy programs can be improved in a systematic way using behavioural theory. COM-B may offer specific advantages as a guiding theory over others such as Self-Determination Theory or Social Cognitive Theory as it allows for the incorporation of the home environment through the ‘opportunity’ component .
Literature from Australia supports this notion, as researchers determined that the COM-B model was an appropriate and comprehensive model to assess how the use of meal kits impacted parents’ food literacy competencies in the home . As the wider literature has consistently suggested that incorporating the home environment and parental support into teens’ dietary habits is advantageous , the COM-B model may provide more direct inclusion of the home compared with other behaviour change theories. This may be especially salient among boys, as previous research conducted in a similar setting has suggested that boys may have less support in the home to learn about food literacy . 4.2. Athletes View Food and Food Literacy Competencies Differently to Non-Athletes Large differences in how athletes and non-athletes valued learning food literacy competencies arose. Athletes drew connections with how competencies such as cooking and meal planning supported sport performance, and as a result were motivated to learn these skills. This is exemplified by the importance that athletes placed on Practice is key , which was minimally valued among non-athletes. Other studies have suggested that teen athletes are motivated to improve their sport performance and may seek out nutrition information for this reason, even to the extent of considering steroids . The literature from the United States further found that simply self-identifying as an athlete, regardless of performance goals, corresponded with a greater intake of sports drinks such as Gatorade . As such, teen athletes may be uniquely motivated to follow certain dietary strategies if they believe them to fit the habits of a ‘typical’ athlete, even if they are not evidence-based. Thus, food literacy programs may offer an opportunity to help empower teen athletes’ knowledge of evidence-based sport nutrition information by tailoring some of their content. 4.3. Traditional Gender Norms May Impact Teens’ Motivation to Learn About Food Literacy Our findings show that diverse gender groups, including boys and non-binary teens, valued participating in school food literacy programs. This is a substantial difference compared with previous research conducted in high school settings, in which girls expressed more opportunities and desire to learn such skills . While these are positive findings, suggesting that gender norms are not motivating factors for teens to learn about food literacy, our results also warrant caution. One main difference that arose in our work was that girls and non-binary teens suggested Establishing importance and Provide a challenge as the most critical aspects that improved the appeal of school programs. In contrast, boys highlighted Make it fun . This gender-based difference may suggest that traditional norms in food-related roles still prevail to some extent. The contrasting values of boys and girls in our sample match the historical literature exploring gender norms and the division of food work in the home , in that girls are expected to be healthier and manage food work for a family to ensure nourishment. Other studies have found that connotations in society surrounding certain types of food work, such as barbequing or being a professional chef, emphasize the ‘fun’ side of food for men, but not for women . As such, future research should explore how the wording in school food literacy programs can be adjusted to reinforce the importance of food literacy for health among boys without undermining the fun side of food literacy for girls.
4.4. Next Steps for Educators and Policy Makers in School Food Literacy Programs Like any behaviour change intervention, school food literacy programs should be subject to rigorous development, monitoring and assessment . As a first step, school policy makers and educators could administer an open-ended feedback survey among teens after a few weeks of a food literacy program using key prompts that evaluate the theoretical underpinnings of the program. Based on our findings, this survey could ask teens whether the program (1) provides a challenge; (2) is fun; (3) clearly explains why food literacy is important for health; and (4) provides enough time to practice the skills they are taught. Feedback from these surveys should be incorporated to refine the programs. To ensure that any changes made meet teens’ needs, teens should remain involved alongside educators . A second suggestion for food literacy educators and policy makers is the need to consider providing teens greater space for self-exploration. For example, teens could prepare a short script outlining how a nutrient of their choice is important for their health or performance as an athlete. An activity of this nature would capture the unique motivators of a diverse group, and in theory could help motivate participation. Finally, policy makers in educational settings should include assessments of ‘success’ which can be meaningfully compared across food literacy programs in diverse schools and countries . This would greatly increase our understanding of what a successful food literacy program should strive to achieve regardless of geographic boundaries. As schools face numerous time and resource constraints, our suggestions may not be feasible in all school settings. Greater work is needed to understand the feasibility of our findings in real-world settings. 4.5. Strengths and Limitations This study is one of the first to consider how participation in sport and gender may impact food literacy experiences in schools. We employed best-practice methods such as pilot testing the interview guide with teens to ensure its appropriateness in wording. We also utilized triangulation and memo-writing to help mitigate potential bias arising from researchers’ own experiences while analyzing the data. Further, the researchers who coded the data had great diversity in their sport experiences, which helped prevent potential bias based on sport involvement from impacting the analysis. Our recruited sample was also gender diverse and accounted for the opinions of teens who played a wide range of sports, though we did not consider differences in responses based on the sport played. Despite these strengths, it must be acknowledged that our sample was relatively homogenous in terms of grade and ethnicity.
Finally, as we did not measure teens’ actual dietary behaviours, future work is needed to assess if school food literacy programs do indeed have the long-term behavioural impacts they are believed to possess.
Food literacy programs intended for high school teens should be tailored in ways that motivate all teens, regardless of gender or sport participation. By doing so, teens may be more likely to pursue food literacy programs and adopt health-protective dietary habits. In the classroom, food literacy educators and policy makers should consider greater inclusion of teens’ self-exploration to tailor the learning experience to their unique needs. Balancing the messaging of the importance of food literacy skills for health in addition to the role of food in personal enjoyment should further be included in the development of these programs. Finally, our results suggest that food literacy programs need critical evaluation that considers teens’ values and perspectives.
Baseline arterial stiffness does not influence post-exercise reduction in pulse wave velocity

INTRODUCTION It is well known that chronically elevated blood pressure (i.e., hypertension) is associated with increased central and peripheral arterial stiffness (Armentano et al., ; Coutinho et al., ; Fantin et al., ; Ferrier et al., ; Kawai et al., ). In addition, acute changes in arterial pressure affect arterial stiffness (Bank et al., ; Crighton Bramwell et al., ; Lim et al., ; Nye, ). In particular, Zieff et al. ( ) demonstrated that changes in local arterial pressure induced by manipulating arm position affect peripheral arterial stiffness measured as blood flow, pulse wave velocity (PWV), beta stiffness, compliance, or distensibility (Zieff et al., ). In that study, peripheral arterial stiffness was significantly greater when the arm was positioned below the heart. Dynamic exercise acutely reduces peripheral arterial stiffness (Heffernan, Collier, et al., ; Heffernan, Jae, et al., ; Rakobowchuk et al., ; Ranadive et al., ; Siasos et al., ; Trachsel et al., ). Data from Sugawara et al. ( ) and Ranadive et al. ( ) indicate that exercise-related reductions in peripheral arterial stiffness are due to local factors since no changes in stiffness are observed in the nonexercised limbs. Another maneuver that has local effects on peripheral arterial stiffness is passive limb compression. Heffernan, Edwards, et al. ( ) observed that passive mechanical compressions of the leg significantly reduced peripheral arterial stiffness with no changes in the control limb. To our knowledge, it is unknown whether acute changes in baseline arterial stiffness affect the responses to an acute bout of dynamic exercise or passive mechanical compressions. Therefore, the purpose of this study was to investigate how manipulation of local arterial pressure with different limb positions affects peripheral arterial stiffness after a single bout of dynamic exercise or passive mechanical compressions. The primary hypothesis was that the magnitude of decrease in peripheral arterial stiffness post exercise or compression would be less when baseline arterial stiffness is higher with the arm positioned below the heart. Manipulating limb position below and above heart level also influences the magnitude of increase in blood flow with exercise (Egana & Green, ; Tschakovsky et al., ; Villar & Hughson, ) and mechanical compression (Tschakovsky et al., ). Since increases in flow have been associated with acute decreases in peripheral arterial stiffness after heating (Cheng et al., ) and reactive hyperemia (Jackson et al., ; Naka et al., ; Stoner et al., ), the secondary hypothesis was that the decrease in peripheral arterial stiffness post exercise or compression would be related to the magnitude of the increase in blood flow during dynamic exercise and passive mechanical compressions.
METHODS 2.1 Ethical Approval The study procedures were approved by the Institutional Review Board at the University of Illinois at Chicago (2022–1129) and followed the guidelines of the Declaration of Helsinki. Written and verbal consents were obtained from each participant before they participated in this study. 2.2 Participants Nineteen healthy adults (24–39 years) volunteered to participate in this study. Exclusion criteria were smoking, cardiovascular, pulmonary, metabolic, and neurological diseases. Other exclusion criteria included hypertension or hypotension, diabetes, obesity (body mass index [BMI] >35 kg/m²), anti-inflammatory medication, and pregnancy. A urine-based pregnancy test was conducted in women of childbearing age. 2.3 Experimental procedures This cross-sectional study consisted of 2 visits to the University of Illinois Chicago Integrative Physiology Laboratory. In visit 1, each participant performed rhythmic handgrip exercise with the arm below and above heart level. In visit 2, passive rhythmic compressions were performed with the participant's arm below and above heart level. Both visits occurred at the same time of day. Participants ( n = 19) were asked to abstain from caffeine, alcoholic drinks, and exercise for at least 24 h before the study visit and fast for at least 4 h before their study visit. After signing the consent, measurement of height and weight was performed. Then participants rested quietly in the supine position for 10 min in a dark and temperature-controlled room before instrumentation. The supine position was maintained throughout the study visit. 2.4 Measures Blood pressure was continuously monitored during both visits via photoplethysmography (Finometer Pro, Finapres Medical System, Amsterdam, the Netherlands) on the middle finger of the left hand. Local arterial pressure in the forearm at the two arm positions (below and above heart level) was estimated by adjusting for the hydrostatic column differences between the heart and mid-forearm. This was accomplished by measuring the vertical distance between the heart and mid-forearm and converting that distance from cmH2O to mmHg. Brachial-radial PWV was assessed using 2 tonometers (Millar Instruments) placed on the proximal brachial artery and longitudinal radial artery of the right arm. The tonometer signal was sampled at 1000 Hz using PowerLab (AD Instruments, Colorado Springs, CO, USA). The tonometers acquired brachial and radial pulse waves simultaneously, and the second derivative of the pressure waves was computed on additional channels for the calculation of transit time using the “foot-to-foot” method. A macro was designed to automatically identify and compute the transit time between brachial and radial pressure waves. The average of at least 10 consecutive brachial-radial pressure waves was recorded. The distance between the brachial and radial arteries was measured with a tape measure. PWV (m/s) was calculated as the brachial-radial distance (m) divided by the average transit time (s). The coefficient of variation (CV) in the PWV measurements was calculated from separate studies in 5 additional healthy subjects. The CV for PWV at resting baseline was 1.7% and after handgrip exercise was 5.0%. Brachial blood flow was measured using an ultrasound and linear probe at 5–13 MHz (Prosound Alpha 7, Hitachi-Aloka, Japan). Brachial diameter and flow velocity were recorded simultaneously with B-mode and Doppler mode.
Mean blood flow velocity (Vm) was obtained with the probe positioned at an insonation angle of <60°. Video recording and post-processing were conducted using FMD Studio Cardiovascular Suite software (QUIPU, Pisa, Italy). Brachial blood velocity was acquired continuously during baseline, exercise or passive compressions, and recovery. All data were exported to an Excel spreadsheet (Microsoft, Redmond, WA, USA) for post-analysis. Sec-by-sec brachial blood flow was calculated using the following equation: BF (mL/min) = Vm × 60 × πr², where Vm is the mean brachial artery velocity in cm/s, multiplied by 60 to convert to cm/min, and r² is the brachial artery radius (cm) squared. The area under the blood flow curve (AUC) was calculated using the GraphPad Prism software (GraphPad by Dotmatics, Boston, MA, USA). 2.5 Protocol After instrumentation during the first visit, participants were asked to squeeze the handgrip dynamometer 3 times with maximal force. The highest value achieved was recorded as the maximal voluntary contraction (MVC). Then participants performed rhythmic handgrip exercise at 50% of MVC for 5 min with the arm either below (~50°) or above (~50°) heart level, with the order of the two positions randomized. To maintain the appropriate angle, the subject's arm was supported on a stationary apparatus that was tilted at the appropriate angles (JawStand XP, Rockwell, Charlotte, NC, USA). Thirty min of recovery was allowed between exercise trials. The rhythmic handgrip exercise consisted of a 2-s contraction followed by a 2-s relaxation, with visual feedback to assist in achieving the correct amount of force. Brachial blood flow was measured at baseline, during the 5 min of exercise, and for 4 min post exercise (10 min total). Brachial-radial PWV was measured at baseline and 5, 15, and 30 min post exercise. On a separate visit, at least 24 h after the first, a wide pressure cuff (CC17, Hokanson, Bellevue, WA) was placed on the forearm of participants for 5 min of 2 s inflation/2 s deflation cycles at 200 mmHg with a rapid cuff inflator unit (Hokanson E20, Bellevue, WA). Cuff cyclic inflations/deflations were performed with the arm either below (~50°) or above (~50°) heart level in randomized order and with at least 30 min between conditions. As with the exercise trials, the arm was supported on a stationary apparatus that tilted at the appropriate angles. Brachial blood flow was measured at baseline, during the 5 min of compressions, and for 4 min post compressions. Brachial-radial PWV was measured at baseline and 5, 15, and 30 min post compressions. 2.6 Statistical analysis PWV, brachial blood flow, and blood pressure were analysed with a two-way (condition × time) repeated measures analysis of variance. Post-hoc comparisons were performed using a Bonferroni correction for multiple comparisons. The significance level was set at α = 0.05. Data are presented as mean ± standard deviation (SD).
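The derived variables above follow directly from the stated formulas. As an illustration only, the following minimal Python sketch shows how the hydrostatic adjustment of local pressure, the brachial-radial PWV, the sec-by-sec brachial blood flow and its area under the curve could be computed. The function names, the cmH2O-to-mmHg factor (1 cmH2O ≈ 0.7355 mmHg) and the example values are assumptions for illustration and are not taken from the study's data or analysis code.

import numpy as np

def local_map(systemic_map_mmhg, vertical_distance_cm, arm_below_heart=True):
    # Estimate local mean arterial pressure at the mid-forearm by adding or
    # subtracting the hydrostatic column between the heart and mid-forearm
    # (1 cmH2O ~= 0.7355 mmHg, assumed conversion factor).
    offset_mmhg = vertical_distance_cm * 0.7355
    return systemic_map_mmhg + offset_mmhg if arm_below_heart else systemic_map_mmhg - offset_mmhg

def pulse_wave_velocity(distance_m, transit_times_s):
    # PWV (m/s) = brachial-radial distance (m) / mean foot-to-foot transit time (s),
    # averaged over at least 10 consecutive pressure waves.
    return distance_m / np.mean(transit_times_s)

def brachial_blood_flow(vm_cm_per_s, diameter_cm):
    # BF (mL/min) = Vm (cm/s) x 60 x pi x r^2, with r the brachial radius in cm.
    r = diameter_cm / 2.0
    return vm_cm_per_s * 60.0 * np.pi * r ** 2

def flow_auc(flow_ml_per_min, time_s):
    # Area under the sec-by-sec blood flow curve via the trapezoid rule.
    flow = np.asarray(flow_ml_per_min, dtype=float)
    t = np.asarray(time_s, dtype=float)
    return float(np.sum((flow[1:] + flow[:-1]) / 2.0 * np.diff(t)))

# Hypothetical example values, for illustration only.
transit_times = np.array([0.028, 0.030, 0.029, 0.031, 0.030, 0.029, 0.030, 0.028, 0.031, 0.029])
print(round(pulse_wave_velocity(0.26, transit_times), 2), "m/s")
print(round(local_map(90.0, 28.0, arm_below_heart=True), 1), "mmHg")
time_s = np.arange(0.0, 60.0)                          # 60 s of sec-by-sec samples
flow = brachial_blood_flow(15.0 + 5.0 * np.sin(time_s / 10.0), 0.4)
print(round(flow_auc(flow, time_s), 1), "mL/min x s (arbitrary example)")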
RESULTS A summary of descriptive data for all 19 participants (10 females, 9 males) is presented in Table . One participant withdrew from the study before completion. Figure shows peripheral arterial stiffness measured as brachial‐radial PWV at baseline and 5‐, 15‐, and 30‐min post rhythmic handgrip exercise (Figure ) and passive compression (Figure ) with the arm below and above heart level. For both the exercise and passive compression trials, a main effect of arm position was observed for peripheral PWV ( p < 0.001) with lower PWV when the arm was positioned above heart level and higher PWV when the arm was positioned below heart. PWV was significantly lower at 5‐min after handgrip exercise with the arm below heart (10.4 ± 2.2 to 8.7 ± 2.2 m/s; p < 0.001) and returned near baseline values at 15‐min (10.5 ± 2.2 m/s, p > 0.05) and 30‐min (10.2 ± 2.5 m/s, p > 0.05). PWV was also significantly lower than baseline at 5‐min post exercise with the arm above heart (6.4 ± 1.3 to 5.3 ± 1.0 m/s; p = 0.004) and returned to baseline at 15‐min (6.1 ± 2 m/s, p > 0.05) and 30‐min post (6.9 ± 1.7 m/s, p > 0.05). There was no position × time interaction for handgrip exercise ( p > 0.05). Figure shows peripheral arterial stiffness measured with brachial‐radial PWV before and 5‐, 15‐, and 30‐min after rhythmic passive mechanical compressions of the forearm vasculature with the arm below and above heart. After rhythmic passive mechanical compressions of the forearm, PWV was significantly reduced at 5‐min with the arm below heart (10.8 ± 2.0 to 9.8 ± 2.1 m/s; p < 0.001) and returned near baseline values at 15‐min (10.3 ± 2.5 m/s, p > 0.05) and 30‐min (10.7 ± 2.3 m/s, p > 0.05). PWV also decreased 5‐min after passive compressions with the arm above heart (7.5 ± 1.4 to 5.7 ± 1.1 m/s, p < 0.001) and returned to baseline values at 15‐min (6.5 ± 1.9 m/s, p > 0.05) and 30‐min post (7.3 ± 2.3 m/s, p > 0.05). There was no position × time interaction for passive compressions ( p > 0.05). Table shows mean arterial pressure (MAP), systolic blood pressure (SBP), diastolic blood pressure (DBP), and pulse pressure (PP) values for rhythmic exercise and passive compressions in both arm positions. No significant differences were observed between arm positions for exercise or passive compressions for all systemic arterial pressure variables. The estimated local mean arterial pressure with the arm below the heart level was 111 ± 8 mmHg and above the heart was 62 ± 7 mmHg for the exercise visit ( p < 0.001). During the compression visit, the estimated local mean arterial pressure was 112 ± 9 mmHg for the arm below the heart and 60 ± 8 mmHg for the arm above the heart ( p < 0.001). To compare the magnitude of change in peripheral PWV between arm below and above heart level, the percent change in PWV was calculated (Figure ). Figure shows the magnitude of change in peripheral PWV from baseline to 5‐, 15‐, and 30‐min after the rhythmic handgrip exercise with arm above and below heart level. There was no interaction effect among the conditions ( p > 0.05). Figure shows the magnitude of change in peripheral PWV from baseline to 5‐, 15‐, and 30‐min post passive compressions with the arm above and below heart level. No interaction effect was identified between conditions ( p > 0.05). Ensemble averages of blood flow curves for all subjects during exercise and passive mechanical compressions of the forearm with the arm below and above heart level are shown in Figure . 
Exercise elicited similar patterns of increase in blood flow at the onset and during the rhythmic handgrip exercise with both arm positions (Figure ). A similar pattern of increase in brachial blood flow with both arm positions was also observed during passive compressions of the forearm vasculature (Figure ). These changes were analyzed quantitatively with 1‐min averages of blood flow at baseline, the last minute of intervention, and the last minute of recovery with the arm below and above heart level (Figure ). No statistically significant differences were observed between the arm below and above the heart for any of the time points for exercise or passive compressions ( p > 0.05).
DISCUSSION The purpose of this study was to investigate how manipulating baseline peripheral arterial stiffness via changes in local arterial pressure made by positioning the arm below and above heart level affects reductions in stiffness after rhythmic handgrip exercise and passive mechanical compressions. The salient finding is that the magnitude of decrease in brachial‐radial PWV after rhythmic handgrip exercise or passive compressions was unaffected by differences in baseline arterial stiffness associated with arm position. Unexpectedly, arm position also did not affect the magnitude of increase in blood flow with either intervention. Much of the literature on pressure dependency of arterial stiffness focuses on chronic changes in stiffness, often with pharmacologic or behavioral interventions. Acute changes in stiffness occur with systemic blood pressure perturbation due to head‐up tilt, mental stress, isometric handgrip exercise, and cold pressor (Lim et al., ) or whole‐body postural changes (Schroeder et al., ). Zieff et al. ( ) employed a more isolated effect of local arterial pressure on peripheral arterial stiffness using changes in limb position. Their results demonstrated that an increase in local arterial pressure caused by lowering the arm below heart level increased peripheral arterial stiffness measured in the brachial artery. It is important to point out that they employed a single‐point method with ultrasound, while the two‐point PWV with tonometers remains the gold standard method (Boutouyrie et al., ; Chirinos, ). Indeed, the same group has shown that single‐point PWV does not directly reflect regional 2‐point PWV (Fryer et al., ). To our knowledge, the current study is the first to demonstrate that local arterial pressure directly influences peripheral arterial stiffness assessed with two‐point measurement of peripheral PWV with tonometers. Our results demonstrate a significant reduction in PWV 5 min after rhythmic handgrip exercise at 50% MVC. This is consistent with the body of literature showing a reduction in peripheral arterial stiffness after an acute bout of dynamic exercise (Campbell et al., ; Doonan et al., ; Fryer et al., ; Heffernan, Collier, et al., ; Heffernan, Jae, et al., ; Kingwell et al., ; Okamoto et al., ; Ranadive et al., ; Sugawara et al., ). The results of the current study also demonstrated a significant reduction in PWV 5 min after passive mechanical compressions ceased, which is consistent with a previous study by Heffernan, Edwards, et al. ( ). The novel finding in the current study is that baseline arterial stiffness did not affect the magnitude of the reduction in PWV after exercise or passive compressions. Changes in local arterial pressure associated with arm position had the anticipated effect on baseline brachial–radial PWV, but the magnitude of the decrease in PWV associated with exercise or mechanical compressions was independent of the baseline values. It is noteworthy that the decrease in peripheral arterial stiffness after acute exercise or passive compressions is exclusively observed in the involved limbs (Heffernan, Edwards, et al., ; Ranadive et al., ; Sugawara et al., ). Thus, it seems that a local factor must contribute to the responses. We reasoned that one relevant local factor is the increase in limb blood flow with exercise or passive compressions. 
Since previous studies have observed decreases in peripheral arterial stiffness after increases in blood flow were elicited by heat (Cheng et al., ) or by reactive hyperemia (Naka et al., ; Stoner et al., ), it seemed plausible that increases in blood flow may play a role in the reduction of peripheral arterial stiffness after acute exercise. Manipulation of limb position has been shown to influence the magnitude of change in blood flow during exercise (Egana & Green, ; Tschakovsky et al., ; Villar & Hughson, ). A single contraction of the forearm produced a higher peak blood flow when the arm was below heart level compared to above (Tschakovsky et al., ). A series of rhythmic calf contractions showed higher blood flow when the legs were below heart level (Egana & Green, ; Villar & Hughson, ). Similarly, passive compressions of the forearm elicited a higher blood flow response when the arm was below heart level (Tschakovsky et al., ) and reactive hyperemia produced a higher blood flow response when the arm was below heart level (Jasperse et al., ). Based on these previous findings, we postulated that rhythmic handgrip and passive compression would produce higher blood flow when performed below heart level. However, our results showed no difference in the magnitude of brachial blood flow change between below versus above heart level. It is difficult to provide an adequate explanation for this inconsistency. One potential explanation may be related to the timing of contractions and compressions. Tschakovsky et al. ( ) performed 1‐s/2‐s contraction/relaxation and inflation/deflation cycles for 1 min, whereas we performed 2‐s/2‐s contraction/relaxation and inflation/deflation cycles for 5 min. It is possible that the comparable reduction in PWV in the two arm positions in the current study is related to the similarity in the brachial blood flow responses. The fact that differences in baseline arterial stiffness did not affect the magnitude of decrease in brachial‐radial PWV after rhythmic handgrip exercise has potential implications for hypertensive individuals who have chronically elevated arterial stiffness (Coutinho et al., ). Exercise should be effective in acutely lowering arterial stiffness in their exercising limbs independent of their blood pressure. This speculation should be verified in future studies. There are some limitations to this study. Although both male and female volunteers were included, the study was not adequately powered to examine sex differences. All the subjects were young and healthy. No older individuals or hypertensive individuals were included. Sex differences, aging, and hypertension would be good topics for future studies. We must acknowledge that local arterial pressure in the experimental limb was estimated rather than directly measured. This calculation is rather straightforward and has been used by other laboratories (Jasperse et al., ; Tucker et al., ; Walker et al., ).
CONCLUSION Data from this study confirm that local arterial pressure affects baseline peripheral arterial stiffness such that baseline brachial‐radial PWV was higher when the arm was below the heart and lower when the arm was above the heart. In contrast to our hypothesis, the data also demonstrate that arm position did not affect the magnitude of decrease in peripheral arterial stiffness measured with brachial‐radial PWV after rhythmic handgrip exercise or passive compressions. Importantly, the reduction in peripheral PWV after rhythmic handgrip exercise or passive compressions was independent of arm position and local arterial pressure‐mediated changes in baseline arterial stiffness.
There was no external funding.
Preoperative quadriceps malalignment is associated with poor outcomes after knee replacement which are avoided by external rotation of the femoral component

Patella maltracking is associated with poor outcomes following total knee arthroplasty (TKA) [ , , , ]. It is caused by extensor mechanism malalignment relative to the flexion–extension axis of the knee . Patellofemoral alignment is highly variable in the normal population . Identifying patients who have pathological extensor mechanism malalignment before TKA is important in determining patients at risk of poor outcomes and in developing individualised alignment techniques for these patients. The association between quadriceps alignment and patella maltracking has been examined . Traditionally, the alignment of the quadriceps has been assumed to correlate with the quadriceps angle (Q-angle), measured from the patella to the anterior inferior iliac crest . However, the Q-angle has a poor correlation with patellofemoral kinematics . The Q-angle only measures the bony attachments of the rectus femoral portion of the quadriceps muscle. It is likely that the bony attachments of the other three muscle bellies of the quadriceps, which attach to the femur and not the pelvis, are not aligned parallel to the rectus femoris. Therefore, the true force vector of the quadriceps complex may be significantly different from the Q-angle. In support of this concept, the rotational alignment of the quadriceps muscle has been shown to rotate around the shaft of the femur in patients with patella instability . A recent publication by Talbot et al. measured the quadriceps tendon alignment (QTA) relative to the centre of the knee joint in a normal population and described a wide range of variability in the anatomy . The Quadriceps Tendon Coronal Angle (QTCA), the angle of the quadriceps tendon relative to the mechanical axis, varies from 14° varus to 7° valgus. The quadriceps tendon axial angle (QTAx), the angle between the apex of the quadriceps tendon and the centre of the shaft of the femur, varies from 44° externally rotated to 42° internally rotated . Additionally, the alignment of the quadriceps tendon is not associated with any other measure of bony anatomy including femoral torsion, trochlear groove or posterior condyle alignment. This indicates that malalignment of the quadriceps tendon is indicative of an isolated quadriceps muscle deformity . The QTAx in a group with lateral facet patellofemoral joint osteoarthritis (LFPFJOA) ( n = 25) was 17.3° compared to 3.3° in a nonarthritic group ( n = 25). The strength of the association indicates that Quadriceps Tendon Malalignment (QTM) is the predominant anatomical deformity associated with the development of patella maltracking causing severe LFPFJOA . In confirming the relationship between lateralisation of the quadriceps tendon and development of LFPFJOA, Talbot et al. demonstrated the clinical relevance of QTM. Previous work by Mizuno et al. showed in a cadaveric study that altering the alignment of the quadriceps mechanism creates changes in the orientation of the patella and patellofemoral contact pressures during flexion , supporting the findings of Talbot et al. The objective of this study is to explore the relationship between QTA, femoral component position and patient-reported outcome measures (PROMs) in patients undergoing TKA.
It aims to (i) identify an association between QTM and patient outcomes following TKA and (ii) using a subgroup of patients with QTM, determine the relationship between femoral component position and PROMs. The hypotheses are (i) preoperative QTA influences TKA outcomes and (ii) in patients with preoperative QTM, outcomes of TKA are influenced by femoral component axial rotation.
Prospectively collected preoperative and postoperative PROMs data were linked to retrospectively collected preoperative measurements of patient anatomy and postoperative measurements of component position. The analysis was performed on the data of patients undergoing TKA by a single surgeon over a 4‐year period (November 2018–November 2022). All patients undergoing TKA with a single prosthesis (Saiph; Matortho) were included. Exclusion criteria included absent or inadequate preoperative CT scans, previous fractures with malunion, previous tibial or femoral osteotomy, missing preoperative or postoperative PROMs. Of 574 cases, 144 were excluded due to missing preoperative or postoperative PROMs data and a further 42 were excluded due to absent or inadequate CT scans, resulting in 388 cases for analysis. CT images consist of 1.25 mm slices through the hip, knee and ankle (GE Optima 660 Brightspeed, 128‐slice scanner). The scans were segmented and analysed using mediCAD® 2.1 3D Knee software (mediCAD Hectec GmbH, Altdorf/Landshut). Scans were first oriented to a consistent reference frame. The axial plane was oriented to a tangential line across the posterior condyles of the femur. The coronal plane was oriented to the plane of the mechanical axis of the femur from the hip centre to the centre of the distal point of the trochlear groove. The sagittal plane was oriented to the mechanical axis of the femur from the hip centre to the knee centre identified as the distal point of the trochlear groove at the apex of the intercondylar notch. The published QTAx technique was used to measure preoperative QTA . QTAx was measured by identifying the centre of the apex of the QT using the centre point of a circle around the tendon. The centre of the axial cross‐section of the femur on the same axial slice was identified using the centre of the circle. The angle was measured from a vertical line perpendicular to the posterior condyle to a line between the centres of the two measured circles. A positive value was recorded when the apex of the quadriceps tendon was lateral to the centre of the femur (see Figure ). Surgical technique The surgical technique was standardised in all cases. Preoperative planning was based on CT scan results and the alignment technique personalised for each patient. The alignment technique was as follows: the starting point was neutral coronal alignment which was adjusted toward constitutional alignment based on ligament balance and prearthritic anatomy. Femoral rotation was adjusted between 0° and 5° of external rotation relative to the posterior condyles based on the position of the surgical epicondylar axis, the trochlear groove, the presence of preoperative patella maltracking and the intraoperative ligament balance. QTA was not measured before surgery. Adjustments to bony alignment, most commonly the tibial cut, were performed when necessary to minimise the requirement for ligament releases. No tourniquet was used. Medial parapatellar approach was performed. In all cases, the patella was resurfaced with a dome button. A lateral facetectomy was performed to remove any uncovered lateral patella bone. Femoral sizing was posterior referenced, and the component was positioned with a flush anterior osteotomy to avoid anterior overstuffing and to allow downsizing and lateralisation of the femoral component. Local anaesthetic infiltration, tranexamic acid and thromboprophylaxis were used. 
Postoperatively, mobilisation of the patient commenced on Day 1 (morning), with full weight bearing and full range of motion. A CT scan was performed between Days 2 and 4. Measurement technique Component position was determined from postoperative CT scans. The scans were segmented and analysed using the mediCAD® 2.1 3D Knee software (mediCAD Hectec GmbH, Altdorf/Landshut). Coronal alignment of the femoral and tibial components were measured relative to mechanical axis and changes in lateral distal femur angle (LDFA) and medial proximal tibial angle (MPTA) were calculated by comparing the preoperative anatomy. Tibial component rotation was measured relative to Akagi's Line . The change in posterior condyle angle was measured by comparing the preoperative posterior condylar angle relative to the Anatomical Epicondylar Axis (AEA) to the postoperative angle between the posterior condyles of the component and the AEA. Twenty cases were randomly selected and measured by the senior author and an orthopaedic resident to determine inter‐ and intraobserver reliability. Both assessors have extensive experience in performing these measurements. Upon confirmation of inter‐ and intraobserver reliability, the senior author measured all cases. Patient‐reported outcome measures PROMs data were collected using an online questionnaire sent via email and completed by patients pre‐ and postoperatively (12 months). The questionnaire included the Knee Injury and Osteoarthritis Outcome Score (KOOS) , Forgotten Joint Score (FJS) and EuroQol Visual Analogue Scale (EQ‐VAS) . The KOOS is a knee‐specific instrument used to measure functional recovery and quality of life (QoL) in patients following TKA . The KOOS includes five scales: pain, symptoms, activities of daily living, sports and recreation, and knee‐related quality of life . All subscales are scored separately from 0 to 100 points, with 0 indicating extreme knee problems and 100 indicating no knee problems. Patient acceptable symptom state (PASS) cut‐points for KOOS subscales were greater than 84.5 points for pain, 80.5 points for symptoms, 83.0 points for activities of daily living and 66.0 points for quality of life . KOOS sport PASS data cut‐point is not available for patients undergoing TKA. A KOOS‐12 score was calculated from the relevant items , with the Minimal Clinical Important Difference (MCID) for the KOOS‐12 at 11.1 points . The FJS is a questionnaire based on the assumption that the ability to forget an artificial joint is the goal following joint arthroplasty. The FJS uses a 5‐point Likert response format, consisting of 12 questions, each measuring the awareness of the artificial joint in daily activities. FJS score ranges 0–100 points with a higher score corresponding to a better outcome. The EQ‐VAS is a stand‐alone component of the EQ‐5D‐5L index , in which a patient self‐reports the impression of their general health and functionality. Patients rate their general health from 0 to 100, with higher scores indicating better function. EQ‐VAS has been evaluated and demonstrated validity in an Australian population undergoing TKA . Data analysis Sample size calculation was performed to adequately power comparative analysis between patients with QTM based on the level of femoral component rotation. A minimum sample size of 70 patients with QTM had a 95% chance of detecting such a difference, with an alpha level of 0.05. Based on the assumption that 20% of patients would be assessed as having QTM, a total sample size of 350 cases was required. 
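Before turning to the reliability and hypothesis-testing analyses, the following minimal Python sketch illustrates how the scoring conventions described above (the KOOS subscale PASS cut-points and the 11.1-point KOOS-12 MCID) could be applied to an individual patient. The function names and example scores are hypothetical and do not represent the study's analysis code.

# PASS cut-points for KOOS subscales after TKA, as stated above (points).
PASS_CUTPOINTS = {
    "pain": 84.5,
    "symptoms": 80.5,
    "adl": 83.0,   # activities of daily living
    "qol": 66.0,   # knee-related quality of life
}
KOOS12_MCID = 11.1  # minimal clinically important difference for the KOOS-12

def koos12_change(preop_score, postop_score):
    # Change score = postoperative score minus preoperative score.
    return postop_score - preop_score

def exceeds_mcid(preop_score, postop_score):
    # True if the improvement reaches the 11.1-point KOOS-12 MCID.
    return koos12_change(preop_score, postop_score) >= KOOS12_MCID

def reaches_pass(subscale, postop_score):
    # True if a postoperative KOOS subscale score is above its PASS cut-point.
    return postop_score > PASS_CUTPOINTS[subscale]

# Hypothetical patient, for illustration only.
print(koos12_change(42.0, 78.5))   # 36.5-point improvement
print(exceeds_mcid(42.0, 78.5))    # True
print(reaches_pass("pain", 86.0))  # True
print(reaches_pass("qol", 60.0))   # False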
Interobserver and intraobserver reliability was calculated using the intraclass correlation coefficient. The strength of the relationship was assessed as low/weak ( r < 0.25), fair ( r = 0.25 to < 0.50), good ( r = 0.50–0.75) or excellent ( r > 0.75). A KOOS-12 change score was calculated by subtracting the KOOS-12 preoperative score from the postoperative KOOS-12 score. Independent sample t tests were conducted comparing patient groups (QTM to non-QTM) and a subgroup of patients with QTM. The subgroup with QTM was divided into two groups using the median femoral component rotation. Pearson correlation coefficients were calculated for femoral rotation and PROMs. The results are reported as means ± standard deviation (SD), range and 95% confidence intervals (95%CI). Data were analysed using IBM SPSS Statistics software (version 29.2.0).
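As a concrete illustration of the QTAx measurement described earlier in the Methods, the short Python sketch below computes the angle between the line joining the two circle centres (quadriceps tendon apex and femoral cross-section) and the line perpendicular to the posterior condylar axis, returning positive values when the tendon apex lies lateral to the centre of the femur. The coordinate convention and example coordinates are assumptions for illustration and are not taken from mediCAD or the study data.

import math

def qtax_angle(tendon_centre, femur_centre, lateral_is_positive_x=True):
    # Quadriceps tendon axial angle (degrees) on an axial CT slice.
    # Both centres are (x, y) coordinates in a slice oriented so that the
    # posterior condylar axis runs along x; the perpendicular reference line
    # therefore runs along y (anterior-posterior). A positive angle means the
    # tendon apex lies lateral to the centre of the femur.
    dx = tendon_centre[0] - femur_centre[0]   # medio-lateral offset
    dy = tendon_centre[1] - femur_centre[1]   # antero-posterior offset
    angle = math.degrees(math.atan2(dx, dy))  # angle from the perpendicular (y) axis
    return angle if lateral_is_positive_x else -angle

# Hypothetical coordinates (mm) on one axial slice: lateral = +x, anterior = +y.
femur_centre = (0.0, 0.0)
tendon_centre = (12.0, 38.0)  # tendon apex anterior and slightly lateral to the femur
print(round(qtax_angle(tendon_centre, femur_centre), 1), "degrees (positive = lateralised tendon)")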
The surgical technique was standardised in all cases. Preoperative planning was based on CT scan results, and the alignment technique was personalised for each patient. The alignment technique was as follows: the starting point was neutral coronal alignment, which was adjusted toward constitutional alignment based on ligament balance and prearthritic anatomy. Femoral rotation was adjusted between 0° and 5° of external rotation relative to the posterior condyles based on the position of the surgical epicondylar axis, the trochlear groove, the presence of preoperative patella maltracking and the intraoperative ligament balance. QTA was not measured before surgery. Adjustments to bony alignment, most commonly the tibial cut, were performed when necessary to minimise the requirement for ligament releases. No tourniquet was used. A medial parapatellar approach was used. In all cases, the patella was resurfaced with a dome button. A lateral facetectomy was performed to remove any uncovered lateral patella bone. Femoral sizing was posterior referenced, and the component was positioned with a flush anterior osteotomy to avoid anterior overstuffing and to allow downsizing and lateralisation of the femoral component. Local anaesthetic infiltration, tranexamic acid and thromboprophylaxis were used. Postoperatively, mobilisation of the patient commenced on Day 1 (morning), with full weight bearing and full range of motion. A CT scan was performed between Days 2 and 4.
Component position was determined from postoperative CT scans. The scans were segmented and analysed using the mediCAD® 2.1 3D Knee software (mediCAD Hectec GmbH, Altdorf/Landshut). Coronal alignment of the femoral and tibial components was measured relative to the mechanical axis, and changes in the lateral distal femur angle (LDFA) and medial proximal tibial angle (MPTA) were calculated by comparison with the preoperative anatomy. Tibial component rotation was measured relative to Akagi's line . The change in posterior condylar angle was measured by comparing the preoperative posterior condylar angle relative to the Anatomical Epicondylar Axis (AEA) to the postoperative angle between the posterior condyles of the component and the AEA. Twenty cases were randomly selected and measured by the senior author and an orthopaedic resident to determine inter‐ and intraobserver reliability. Both assessors had extensive experience in performing these measurements. Upon confirmation of inter‐ and intraobserver reliability, the senior author measured all cases.
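The angular changes described above are simple pre- versus postoperative differences. As a minimal sketch (the study used the mediCAD software for the actual measurements; the function and the example values below are hypothetical), the derived changes could be computed as:

```python
# Minimal sketch of the derived angular changes described above.
# All input values are hypothetical examples; in the study they came from
# segmented pre- and postoperative CT scans analysed in mediCAD.

def angular_changes(preop_ldfa, postop_ldfa,
                    preop_mpta, postop_mpta,
                    preop_condyles_to_aea, postop_condyles_to_aea):
    """Return the change in LDFA, MPTA and posterior condylar angle."""
    return {
        "delta_LDFA": postop_ldfa - preop_ldfa,
        "delta_MPTA": postop_mpta - preop_mpta,
        # Change in posterior condylar angle: preoperative posterior condyles
        # vs the AEA compared with the component's posterior condyles vs the AEA.
        "delta_PCA": postop_condyles_to_aea - preop_condyles_to_aea,
    }

print(angular_changes(87.0, 89.5, 86.0, 87.0, 3.0, 1.0))
```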
PROMs data were collected using an online questionnaire sent via email and completed by patients pre‐ and postoperatively (12 months). The questionnaire included the Knee Injury and Osteoarthritis Outcome Score (KOOS) , Forgotten Joint Score (FJS) and EuroQol Visual Analogue Scale (EQ‐VAS) . The KOOS is a knee‐specific instrument used to measure functional recovery and quality of life (QoL) in patients following TKA . The KOOS includes five scales: pain, symptoms, activities of daily living, sports and recreation, and knee‐related quality of life . All subscales are scored separately from 0 to 100 points, with 0 indicating extreme knee problems and 100 indicating no knee problems. Patient acceptable symptom state (PASS) cut‐points for the KOOS subscales were greater than 84.5 points for pain, 80.5 points for symptoms, 83.0 points for activities of daily living and 66.0 points for quality of life . A PASS cut‐point for the KOOS sport subscale is not available for patients undergoing TKA. A KOOS‐12 score was calculated from the relevant items , with the Minimal Clinically Important Difference (MCID) for the KOOS‐12 being 11.1 points . The FJS is a questionnaire based on the assumption that the ability to forget an artificial joint is the goal following joint arthroplasty. The FJS uses a 5‐point Likert response format, consisting of 12 questions, each measuring awareness of the artificial joint during daily activities. The FJS score ranges from 0 to 100 points, with a higher score corresponding to a better outcome. The EQ‐VAS is a stand‐alone component of the EQ‐5D‐5L index , in which a patient self‐reports the impression of their general health and functionality. Patients rate their general health from 0 to 100, with higher scores indicating better function. The EQ‐VAS has been evaluated and demonstrated validity in an Australian population undergoing TKA .
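To make these scoring rules concrete, the short sketch below applies the PASS cut‐points and the KOOS‐12 MCID quoted above to example scores. The threshold values come from the text; the function names and the example values are illustrative assumptions only.

```python
# Illustrative only: applies the PASS cut-points and KOOS-12 MCID quoted above.
PASS_CUTPOINTS = {          # "greater than" thresholds, in points
    "pain": 84.5,
    "symptoms": 80.5,
    "adl": 83.0,            # activities of daily living
    "qol": 66.0,            # knee-related quality of life
}                           # no validated PASS cut-point for KOOS sport in TKA
KOOS12_MCID = 11.1

def achieved_pass(subscale: str, score: float) -> bool:
    """True if a 0-100 KOOS subscale score exceeds its PASS cut-point."""
    return score > PASS_CUTPOINTS[subscale]

def koos12_change(pre: float, post: float) -> float:
    """Postoperative minus preoperative KOOS-12 score."""
    return post - pre

def exceeds_mcid(pre: float, post: float) -> bool:
    """True if the KOOS-12 improvement reaches the 11.1-point MCID."""
    return koos12_change(pre, post) >= KOOS12_MCID

# Example: a patient improving from 45 to 78 points exceeds the MCID.
print(koos12_change(45, 78), exceeds_mcid(45, 78), achieved_pass("pain", 86.0))
```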
A sample size calculation was performed to adequately power the comparative analysis between patients with QTM based on the level of femoral component rotation. A minimum sample size of 70 patients with QTM provided a 95% chance of detecting such a difference, with an alpha level of 0.05. Based on the assumption that 20% of patients would be assessed as having QTM, a total sample size of 350 cases was required. Interobserver and intraobserver reliability was calculated using the intraclass correlation coefficient. The strength of the relationship was assessed as low/weak ( r < 0.25), fair ( r = 0.25 to < 0.50), good ( r = 0.50–0.75) or excellent ( r > 0.75). A KOOS‐12 change score was calculated by subtracting the preoperative KOOS‐12 score from the postoperative KOOS‐12 score. Independent‐samples t tests were conducted comparing patient groups (QTM vs non‐QTM) and comparing subgroups of patients with QTM. The subgroup with QTM was divided into two groups using the median femoral component rotation. Pearson correlation coefficients were calculated for femoral rotation and PROMs. Results are reported as means ± standard deviation (SD), ranges and 95% confidence intervals (95% CI). Data were analysed using IBM SPSS Statistics software (version 29.2.0).
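The analyses were run in SPSS; the sketch below is only a hedged Python illustration of the statistical steps described (the intraclass correlation bands, an independent‐samples t test, and a Pearson correlation). All data shown are placeholders, not study data.

```python
# Hypothetical sketch of the comparative analysis described above (the study
# used IBM SPSS); the data and group sizes below are placeholders.
import numpy as np
from scipy import stats

def grade_reliability(icc: float) -> str:
    """Grade an intraclass correlation using the bands quoted above."""
    if icc < 0.25:
        return "low/weak"
    if icc < 0.50:
        return "fair"
    if icc <= 0.75:
        return "good"
    return "excellent"

rng = np.random.default_rng(0)
koos12_change_qtm = rng.normal(35, 20, 76)        # placeholder QTM group
koos12_change_non_qtm = rng.normal(42, 20, 312)   # placeholder non-QTM group

# Independent-samples t test comparing QTM with non-QTM patients.
t, p = stats.ttest_ind(koos12_change_qtm, koos12_change_non_qtm)

# Pearson correlation between femoral component rotation and a PROM.
rotation = rng.normal(2, 1.5, 76)                 # degrees, placeholder
r, p_r = stats.pearsonr(rotation, koos12_change_qtm)

print(grade_reliability(0.82), round(t, 2), round(p, 3), round(r, 2))
```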
Excellent intra‐ and interobserver reliabilities were demonstrated between CT measurements with r > 0.8.
The QTAx preoperative measurement mean was 6.2° (SD 12.0°, range −33° to 54°). Based on previous research , an angle of >14° is a clinically relevant level of QTM. Seventy‐six patients (19.6%) fulfilled the criteria for QTM. There were no significant differences in age or sex between the two groups. Demographic data are shown in Table . Compared to the non‐QTM group, the QTM group reported lower scores across all PROMs indicating poorer postoperative outcomes. This difference was statistically significant in all KOOS scales (pain, symptoms, activities of daily living, sport, QoL), the FJS, and the EQ‐VAS. Comparative analysis of patients with or without QTM is shown in Table . The comparison of KOOS scale PASS percentages is shown in Table . There was no association between internally rotated QTAx (medialised quadriceps tendon) and patient outcomes.
Using data from patients identified with QTM ( n = 76), analysis was undertaken to assess the effect, if any, of femoral component position on PROMs. The association between femoral component rotation and change in KOOS‐12 score in patients with QTM is shown in Figure . There was no significant association between PROMs and postoperative LDFA, MPTA, tibial component rotation, change in LDFA or change in MPTA in patients with QTM. The correlations between outcomes and femoral component rotation are reported in Table . Patients with QTM who reported a KOOS‐12 > 75 ( n = 40) had a mean femoral component external rotation of 2.9° (SD 1.4°) compared to a mean of 1.4° (SD 1.8°) in patients with a KOOS‐12 < 75 ( p < 0.001, n = 36). There was a positive correlation between femoral component rotation and QTAx angle in patients who achieved a good outcome (KOOS‐12 > 75) after TKA ( r = 0.536, p < 0.001). This indicates that patients with more severe QTM require greater femoral component external rotation to optimise outcomes. This is shown in Figure . Patients with QTM were divided into two groups based on the median femoral component rotation. Due to postoperative metal artefact obscuring landmarks, the femoral component rotation was measured in 70 of 76 patients. Patients who had ≤2° of femoral component external rotation had a mean KOOS‐12 of 70.0 (SD 16.8) compared to patients with >2° of femoral component external rotation, who scored a mean KOOS‐12 of 79.3 (SD 18.1; p = 0.015). Patients who had ≤2° of femoral component external rotation had a mean change in KOOS‐12 of 31.3 (SD 21.3) compared to patients with >2° of femoral component external rotation, who had a mean change in KOOS‐12 of 43.0 (SD 21.9). Therefore, the difference in the change in KOOS‐12 between the two groups was 11.7 points (95% CI 1.4–22.0, p = 0.013). The remaining PROMs and KOOS subscores are summarised in Table . There was no difference in KOOS‐12 between patients with no QTM and patients with QTM and greater than 2° of femoral component external rotation (80.3 [SD 16.1] vs. 79.3 [SD 18.1]). The comparison of KOOS subscale PASS percentages in patients with QTM and greater or less than 2° of femoral rotation is shown in Table .
The key findings of this study are, first, that QTM is a risk factor for significantly poorer PROMs following TKA and, second, that in patients with QTM, additional femoral component external rotation is associated with improved outcomes. When considering the overall impact of QTM on PROMs following TKA, the lower reported scores in patients with QTM were statistically significant but not clinically significant. For example, the average difference in KOOS‐12 at 12 months postoperatively was 7 points lower in patients with QTM compared to non‐QTM patients. However, the MCID for KOOS‐12 is considered to be 11.1, making this difference clinically insignificant. This pattern was consistent across the other PROMs, including the KOOS subscales, the FJS and EQ‐VAS. The second phase of this study examined the effect of femoral component rotation on PROMs in patients with QTM. This showed an association between increased femoral component external rotation and improved PROMs in patients with QTM. Correlation analysis demonstrated a moderately strong association ( r = 0.345 to 0.397) between femoral component rotation and KOOS‐12, FJS and pre‐ and postoperative change in KOOS‐12. A strong correlation would require a Pearson's correlation of >0.5. The scatter graph (Figure ) shows that while there were still some patients with QTM and less femoral component rotation who did well, the likelihood of a good outcome increased significantly in patients with more than approximately 2° of external rotation. Patients with QTM were analysed in three ways to attempt to quantify the effect of femoral component rotation on PROMs. First, applying a KOOS‐12 cut‐off score based on the median score of 75 points, patients with QTM were divided into two approximately equal groups. The group with better PROMs was noted to have a higher mean femoral component external rotation. Further analysis of this subgroup showed that there was a significant relationship ( r = 0.536, p < 0.001) between the preoperative QTAx angle and the amount of femoral component external rotation required to achieve inclusion in the good‐outcome group with a score of >75 points (Figure ). This suggests a dose‐dependent relationship between the preoperative QTM and the amount of femoral component rotation required to achieve superior PROMs. Second, the same group of patients with QTM was divided into two approximately equal groups based on femoral rotation of greater or less than the median of 2° from the preoperative posterior condyles. There was a statistically and clinically significant difference in KOOS‐12 scores (11.7 points) benefitting the group with more than 2° of external rotation. This suggests QTM pathophysiology can be corrected or accommodated by altering the flexion‐extension axis of the knee to compensate for the extensor mechanism deformity. Finally, the PASS percentages were calculated for the patients with more or less than 2° of femoral component external rotation. This confirmed a significant improvement in the PASS percentages for the subscales of the KOOS score in patients with QTM who had >2° of femoral external rotation (Table ). Current concepts in knee arthroplasty alignment technique generally assume the extensor mechanism is always orientated perpendicular to the flexion‐extension axis of the knee or the axial alignment of the trochlear groove, or both [ , , , , ].
Based on these assumptions, the aim of personalised alignment techniques is to recreate the native flexion–extension axis of the knee and the alignment of the trochlear groove. These assumptions cannot be maintained if the vector force of the quadriceps muscle is highly variable and not linked to the alignment of the trochlear groove . There is evidence that lateralisation relative to the mechanical axis and the trochlear groove (QTCA), or external rotation of the quadriceps tendon relative to the shaft of the femur (QTAx), leads to a mismatch between the alignment of the extensor mechanism and both the flexion–extension axis of the knee and the alignment of the trochlear groove in a subgroup of patients . QTM was shown to be a pathological deformity of the extensor mechanism by Talbot et al., who showed that it is closely linked to the development of lateral facet patellofemoral osteoarthritis. This deformity will be recreated if the native anatomy is recreated during TKA, leading to a recreation of the native patella maltracking. Mizuno et al. recognised the importance of the force vectors related to osseous anatomy leading to patellar maltracking and altered tibiofemoral kinematics . Biomechanical studies have shown the influence of femoral component rotation on patellar maltracking , quadriceps weakness and impaired gait . The pull of the quadriceps tendon can, therefore, be seen as an independent soft tissue factor altering biomechanics and influencing outcomes after TKA. The current results reinforce this, demonstrating that the outcomes of TKA are both statistically and clinically significantly worse in patients with QTM if the position of the posterior condyles is recreated. These results contradict the idea that recreating the native flexion–extension axis or trochlear groove anatomy is the best approach in every patient and suggest that, in patients with preoperative anatomical deformity such as QTM, changing the native anatomy to correct or accommodate the deformity may be an advantage. This individualised approach provides a potential advancement over the previous paradigm of optimal femoral component alignment being parallel to the posterior condylar axis or to the surgical epicondylar axis . Future advancements in individualised alignment may involve the analysis of multiple anatomical factors, such as the extensor mechanism alignment, to determine the best component position for any individual combination of anatomical variations. This will require large databases with preoperative, intraoperative and postoperative measurements combined with accurate patient outcomes. It is likely to also require accurate assistive technologies to implement the individualised plans intraoperatively. While 19.6% of patients in this study had a degree of quadriceps malalignment which appeared to affect their outcomes, the remaining 80.4% of patients did not. There was no difference in the outcomes of patients with neutral QTAx (<15°) and those with severely medialised quadriceps tendons. This would suggest that approximately 80% of patients do not benefit from additional femoral component external rotation. These patients are likely to be best served by alignment philosophies such as kinematic alignment, functional alignment and inverse kinematic alignment, which prioritise ligament balance and which lead to femoral component rotation closer to the posterior condylar axis.
Previous studies which have not found a difference in outcomes between kinematic alignment and mechanical alignment may have been influenced by the inclusion of patients with preoperative extensor mechanism malalignment . This subgroup of patients may have benefitted from the additional femoral component external rotation provided by mechanical alignment. The next evolution of personalised alignment techniques for TKA should identify patients with pathological extensor mechanism alignment and alter the component position commensurate with the severity of malalignment. An algorithm to guide the rotation of the femoral component could be developed which may vary between component types due to variations in design such as trochlear groove alignment and lateralisation, anterior femoral offset, trochlear ridge height, and tibiofemoral stability. Additional surgical factors also need to be considered for their effect on patella maltracking. These include changes in alignment such as tibial component rotation, joint‐line obliquity, changes in the anterior offset of the femoral component, patella component position, the amount of bone removed by the lateral facetectomy and the angle and depth of patella resection, and changes in surgical technique including lateral retinacular release and surgical approach. Alterations in femoral component rotation will also require adjustments to coronal and sagittal component position to obtain adequate ligament balance within a safe envelope. This research has limitations, including a lack of randomisation. Femoral component position was determined intraoperatively and based on a combination of factors including the preoperative posterior condylar angle on CT scan, trochlear groove alignment, coronal alignment, ligament balance and the presence of severe patella maltracking. While coronal component alignment, tibial rotation and change in coronal component alignment were shown to have no effect on PROMs in patients with QTM, other factors such as patella resection, anterior femoral offset and component lateralisation were not measured. This has led to a wide range of component rotations which allowed for comparative analysis but may introduce undetermined confounding factors. Due to the retrospective nature of this study, QTA was not measured at the time of surgery and, therefore, did not influence the choice of alignment. The study was sufficiently powered to conduct the predetermined analyses. However, a larger sample is required to establish the optimal positioning of the femoral component for any given degree of extensor mechanism malalignment.
QTM is a predictor of patella maltracking and a risk factor for poor patient outcomes following TKA. Adjustment to the flexion–extension axis of the knee by externally rotating the femoral component relative to the native posterior condyles led to improved outcomes in this subgroup of patients.
Simon Talbot designed the study, collected data and wrote the manuscript. Rachel Zordan analysed data and wrote the manuscript. Francesca Sasanelli collected data and wrote the manuscript. Matthew Sun collected data and wrote the manuscript. All authors read and approved the final manuscript.
The authors declare no conflicts of interest.
Ethics approval was sought and gained from the St. Vincent's Health Network (HREC/18/SVH/250). Informed consent was obtained from all participants to prospectively collect and analyse scans and outcome data.
|
The Journal | 8d47495e-9a1a-4c52-b17f-0e67e21a61b7 | 11318577 | Pediatrics[mh] | The authors have no conflicts of interest to declare. No funding was received. O.D.S. and C.P.S. contributed equally to this editorial. Both took part in writing and editing. |
A Playroom Internal Waiting Area Improves Productivity in the Pediatric Emergency Department | d19592cf-2810-461d-9b45-eb19df8d997e | 7081871 | Pediatrics[mh] | Rapid rooming of patients on arrival facilitates clinical decision-making and disposition, thereby increasing the number of patients a pediatric emergency department (PED) can see per hour; rapid rooming also improves the perception of care and timeliness. Parents value being seen quickly on arrival in the PED and, particularly in non-monopoly markets, this is important. Rapid rooming, however, requires empty treatment rooms, and these are typically limited by physical or staffing constraints. – Efficiently using the available staffed spaces becomes paramount. Here we describe how we measured the effect of a PED playroom on time to rooming of patients and total length of stay (LOS). We have observed that in many PEDs most of the time in treatment rooms is spent waiting, rather than being treated or evaluated. Such waiting is typically for patient registration, imaging to be performed, test results to return, and antipyretics to take effect. While waiting, the treatment room itself adds no value to the child’s stay. Worse, treatment rooms are designed for clinical care, which is inherently child unfriendly. Frequently, parents spend a good deal of time restraining their child’s natural curiosity, adding to the stress of the encounter. The opportunity cost to keeping patients in treatment rooms for the duration of their ED stay is that it prevents other children from being seen. Despite this, there is a widespread culture in many American PEDs of keeping children in treatment rooms for the duration of their ED visit.
We created a flow system moving children who were not receiving active interventions from their treatment room to a playroom. This space is child friendly and, as with inpatient playrooms, examinations and procedures are prohibited in this "safe space." Children in the playroom are supervised by their parents, not nursing staff. This frees up nursing staff and treatment rooms to allow the next patient to be evaluated. We are unaware of any prior attempts to implement such a playroom model in pediatric emergency medicine. Randomized controlled trials of interventions such as ours are impractical; the numbers of PEDs being opened is simply too small and the prospect of obtaining consent from hospital administrators to allow their PED to be randomized to a potentially less-efficient flow model is remote. Before and after studies are difficult because the concept is unproven and secular effects are inevitable. Implementing and comparing alternate patient-flow systems on alternate days presents logistical challenges and costs that few healthcare systems would contemplate. Consequently, we tried to demonstrate the effect of a PED playroom on patient flow by comparing patient flow characteristics at times when the playroom model could be of benefit compared with times when we knew by the limits of our design that a playroom model could not help. We would then attribute the difference in performance primarily to the playroom. This study was exempt from institutional review board review.
This was a community PED seeing 21,000 patients annually at the time of the study, with a mixed pediatric/general emergency physician and advanced practice provider (APP) staffing model. The PED has 11 exam rooms with a guaranteed minimum staffing for eight beds and sees patients up to 21 years of age.
Time zero was set as the time the patient was entered into the computer system. This was performed by a nurse in the arrival lobby for patients who were brought in by their parents and by the nursing team leader if a patient was brought in by ambulance. We measured the time interval from arrival (time zero as defined above) to either (1) being roomed by a nurse or (2) roomed and evaluated by a physician or an APP, whichever was shorter. This analysis method captured cases where the medical exam was initiated before or during the nursing triage process. We defined LOS as time from arrival to time the patient left the ED. For analysis purposes we derived playroom eligibility from recorded electronic health record (EHR) variables and the fact of being placed there. This assumes that children who were not placed in the playroom were ineligible to be placed there for subjective reasons (eg, medically or behaviorally unsafe, rather than staff not moving them).

Population Health Research Capsule
What do we already know about this issue? Children occupying treatment rooms in the pediatric emergency department while awaiting test results or to defervesce delays the evaluation of subsequent children.
What was the research question? What would be the effect of moving these children from treatment rooms to a shared playroom?
What was the major finding of the study? A playroom internal waiting area improved throughput times overall except during very quiet times.
How does this improve population health? In cultures where parents expect to occupy a treatment room for the duration of their child's stay, incorporating a playroom improves patient throughput times.
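Given the time zero, rooming, and LOS definitions above, the throughput intervals reduce to timestamp arithmetic on the EHR extract. A hedged pandas sketch follows; the column names and timestamps are assumptions rather than the actual EHR fields.

```python
# Hedged sketch of the interval definitions above; column names are assumed,
# not the actual EHR field names.
import pandas as pd

df = pd.DataFrame({
    "arrival":   pd.to_datetime(["2016-01-01 10:00", "2016-01-01 10:20"]),
    "roomed":    pd.to_datetime(["2016-01-01 10:12", "2016-01-01 10:55"]),
    "provider":  pd.to_datetime(["2016-01-01 10:25", "2016-01-01 10:50"]),
    "departure": pd.to_datetime(["2016-01-01 12:00", "2016-01-01 13:10"]),
})

# Time zero is arrival; the rooming endpoint is whichever came first of
# nurse rooming or physician/APP evaluation.
first_seen = df[["roomed", "provider"]].min(axis=1)
df["minutes_to_room"] = (first_seen - df["arrival"]).dt.total_seconds() / 60
df["roomed_within_30"] = df["minutes_to_room"] <= 30

# LOS runs from arrival until the patient left the ED.
df["los_minutes"] = (df["departure"] - df["arrival"]).dt.total_seconds() / 60
print(df[["minutes_to_room", "roomed_within_30", "los_minutes"]])
```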
Our primary outcome was the effect of a playroom on rooming times measured by the hazard ratio (HR). We measured the effect of the playroom on the odds of a patient being roomed within 30 minutes of arrival. Our secondary outcome was the effect of playroom use on overall PED LOS. For this we measured the interval from rooming to discharge and added it to the interval from arrival to rooming.
The intervention was a PED playroom where patients could await the next task in care. Patients were classified as eligible to be placed in the playroom (“playroom eligible”) if they met all the following criteria: required only imaging, urine testing, or venipuncture without intravenous placement; older than eight weeks; not immunocompromised; and not suspected to be medically or behaviorally dangerous to other children. (For example, a suspected case of measles or a child prone to violent outbursts could not be sent to the playroom.) Children not meeting these criteria were “not playroom eligible.” Children who were “not playroom eligible” had, except for trips to the radiology suite, to be kept in their treatment rooms for the entire duration of their ED stay. Staff, not parents, determined playroom eligibility. The PED patient-flow model expected immediate rooming and in-room triage by the nurse assigned to an exam room unless all exam rooms were occupied. Any team member could room a patient; physician evaluation could occur before, during, or after nursing triage. Nursing triage as in most EDs performs a variety of functions in addition to determining treatment priority. Prior to implementation we trained a core group of nurses who staff the PED and provided immediate feedback when the model was not being implemented. We used nurse staff meetings and weekly electronic newsletters to reinforce use of this model during the initial year.
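The eligibility rule is essentially a checklist, and writing it out as code makes the logic explicit. The sketch below is illustrative only and assumes simple boolean and numeric fields rather than the actual triage documentation; the last two flags reflect staff judgement.

```python
# Minimal sketch of the playroom-eligibility checklist described above,
# assuming simple boolean/numeric fields rather than actual triage records.
ALLOWED_WORKUPS = {"imaging", "urine", "venipuncture"}  # no IV placement

def playroom_eligible(age_weeks: float,
                      workups: set[str],
                      immunocompromised: bool,
                      medically_or_behaviorally_unsafe: bool) -> bool:
    """Apply the study's criteria; staff judgement decided the last two flags."""
    return (
        age_weeks > 8
        and workups <= ALLOWED_WORKUPS          # only imaging/urine/venipuncture
        and not immunocompromised
        and not medically_or_behaviorally_unsafe
    )

# A febrile toddler awaiting a urine result could wait in the playroom:
print(playroom_eligible(80, {"urine"}, False, False))        # True
# A child with suspected measles stays in a treatment room:
print(playroom_eligible(80, set(), False, True))             # False
```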
We performed a retrospective analysis using data from all PED visits extracted from the EHR from August 8, 2015, to August 8, 2017. We performed regression parameterized as a proportional hazards model with the Gompertz distribution using Stata 14.2 statistical software (Statacorp LLP, College Station, TX). We adjusted the regression for patients’ ages and triage category; individual physician or APP; the number of patients who arrived within the preceding hour; whether any laboratory testing was performed; how many of the preceding eight patients required lab testing. We used the previous eight patients due to the PED’s minimum staffing for eight beds. We tested for interactions between variables and retained those that were important. We included a cluster term for patient to adjust for repeat attendance. We also included a variable for the initial 11-month period when the PED functioned as a discrete unit embedded within an adult ED with limited physical barriers. After this period the PED was relocated within the existing space by bed re-designation and physically separated from the adult unit with three sets (rather than the previous single set) of double doors, and provided with its own ambulance entrance. This change added several minutes walking time for parents from arrival (time zero) to their treatment room. We graphed the proportional hazards regression to show the effects of the playroom on median time to rooming under differing patient acuity and volume scenarios. These graphs allow the reader to compare scenarios when there were no playroom-eligible patients and when there were more rooms than patients available (ie, the playroom could not affect patient throughput) and with a range of other scenarios when a playroom could improve patient throughput. We used logistic regression, with the same independent variables as the proportional hazards model, to estimate the odds of a patient being roomed within 30 minutes of arrival. The differences observed between these scenarios reflects the effect of the playroom on patient throughput. For our secondary outcome, we created a proportional hazards regression model of the interval from being roomed to leaving the ED. This prevented incorporation of the direct effects of the playroom noted in the first regression contaminating the second regression. Variables that lead to faster rooming (e.g., higher acuity) may also lead to longer time to discharge. We included the interval from arrival to rooming as an independent variable to see whether there were any indirect effects of changing the time to being roomed on the subsequent duration of the visit. We also included age, triage category, and blood, urine or radiology testing, as independent variables. We tested for interactions between variables and retained those that were important, including three-way interactions between the number of patients arriving in the PED during that hour, on that day, and the number of the preceding eight patients who were playroom eligible. We again included a cluster term for patient to adjust for repeat attendance. This model better reflected reality than simpler models and allowed for the possibility that the playroom intervention could variably improve or worsen rooming times depending on circumstances. Because of this variable effect, we graphed the effect of the playroom under different scenarios using the marginsplot function in Stata. 
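For readers who want to reproduce the general shape of this modelling outside Stata, the sketch below is a rough Python analogue, not the authors' code: a semiparametric Cox model with patient-clustered standard errors stands in for the parametric Gompertz proportional hazards fit, and the logistic model mirrors the 30-minute rooming outcome. All column names and the input file are assumptions.

```python
# Rough Python analogue of the modelling described above. The study used
# Stata (Gompertz proportional hazards and logistic regression with a
# cluster term per patient); here a semiparametric Cox model stands in for
# the parametric fit, and all column names are assumptions.
import pandas as pd
from lifelines import CoxPHFitter
import statsmodels.formula.api as smf

df = pd.read_csv("ped_visits.csv")   # hypothetical extract of the EHR data
# (categorical covariates would need numeric encoding, eg pd.get_dummies)

covariates = ["age", "triage_category", "arrivals_last_hour",
              "any_lab_testing", "labs_in_prior_8", "playroom_eligible_prior_8"]

# Time-to-rooming model with standard errors clustered on patient
# to allow for repeat attendances.
cph = CoxPHFitter()
cph.fit(df[covariates + ["minutes_to_room", "roomed", "patient_id"]],
        duration_col="minutes_to_room",
        event_col="roomed",
        cluster_col="patient_id")
cph.print_summary()

# Logistic regression: odds of being roomed within 30 minutes of arrival
# (0/1 indicator), using the same independent variables.
logit = smf.logit("roomed_within_30 ~ " + " + ".join(covariates), data=df).fit()
print(logit.summary())
```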
We estimated the effect on our secondary outcome indirectly using the median time taken to room patients from the second regression (indirect effect) and adding the resulting median time to the time to be roomed (direct effect). We manually graphed our secondary outcome under a selected number of scenarios.
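In other words, the overall LOS estimate for any scenario is the sum of the two modelled medians. A minimal illustration with placeholder numbers:

```python
# Hypothetical illustration: overall LOS for a scenario is the modelled median
# time to rooming (direct effect) plus the modelled median rooming-to-discharge
# interval (indirect effect). The numbers are placeholders, not study output.
def estimated_los(median_minutes_to_room: float,
                  median_room_to_discharge: float) -> float:
    return median_minutes_to_room + median_room_to_discharge

print(estimated_los(14, 84))   # e.g. 98 minutes
```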
We had 43,634 patient encounters, of whom 10,134 (23%) were playroom eligible and 2,260 (5%) were admitted. Demographic characteristics are summarized in the table. The adjusted hazard ratio (HR) for rooming from arrival was 1.14 (95% confidence interval [CI], 1.10–1.18) per previously arriving playroom-eligible patient. There were significant interactions between the HR for initial rooming, the total number of patients seen that day (starting at midnight) up to the arrival of the current patient, and the number of patients who arrived within an hour of the patient arriving. The odds ratio (OR) of a patient being roomed within 30 minutes of arrival was 1.46 (95% CI, 1.33–1.56) for each previously arriving playroom-eligible patient. The impact of the playroom on PED LOS varied depending on daily census and recent arrivals. For example, during a quiet period (10 patients seen before the current patient, of whom only two presented within an hour of the current patient), the decrease in PED rooming time due to four vs zero playroom-eligible patients was four minutes (10 vs 14 minutes), and the overall improvement in LOS was two minutes (96 vs 98 minutes). In sharp contrast, when the department was busy (90 patients seen before the current patient, 12 of these presenting within an hour of the current patient), the decrease in PED rooming time due to four vs zero playroom-eligible patients was 42 minutes (68 vs 110 minutes), and the overall improvement in PED LOS was 40 minutes (168 vs 208 minutes). The effects of each variable and their interactions are shown in the regression table. Higher acuity in the current patient, lower acuity in the preceding eight patients, and fewer investigations in the preceding eight patients were associated with shorter median rooming times. Conversely, lower acuity in the current patient being treated, higher overall census, and more patients arriving within an hour of the current patient were all associated with longer median rooming times. The figure shows the effects of using a playroom/internal waiting room model given various scenarios. These graphs show decreased median time to rooming as the number of playroom-eligible patients increases. As patient census increases, particularly when a large number of patients arrive in the hour preceding the arrival of the current patient, the median time to rooming increases, despite increasing numbers of playroom-eligible children. This reflects the point where the number of patients to be seen exceeds staff capacity. A further figure demonstrates the effect of the playroom on total LOS in a subset of these scenarios. The interval between being roomed and being discharged was most heavily influenced by the severity of illness and the extent of laboratory and radiological testing performed on the child him/herself rather than by the testing ordered on other children. We found an association between a shorter time to discharge after being roomed and the natural logarithm of the interval between arrival and being roomed. This partially offsets the effect of the reduction in time to rooming on overall length of stay in the PED, and the overall effect of the playroom model varies with increasing PED activity.
The answer to the question, "Does a playroom decrease time to rooming and LOS?", is that it depends. The playroom intervention generally decreased patient rooming and LOS times. The effect size varies with how busy the PED is; up to a point, the busier the PED the greater the benefit. When all treatment rooms are filled with non-playroom-eligible patients, the benefit of the playroom disappears. Times to rooming and ED LOS under this scenario reflect the benefit of the playroom and other patient characteristics. Our results adjust for these other characteristics to the extent that we could, but our estimates remain just that. Conversely, when patient volumes are low, moving patients to the playroom (for example, to defervesce) and sometimes having to move them back to a treatment room for re-evaluation imposes a time cost without clear benefit to the next patient who has not yet presented. The practical implication is that during quiet times, typically 3 am to 8 am in our PED when there are open available exam rooms, patients can be allowed to sleep in an exam room without loss of productivity. Our other findings, that higher acuity in the current patient and lower acuity and less laboratory testing in preceding patients were associated with more rapid rooming, seem self-evident, but their magnitude is important. While acuity cannot be changed, implementation of evidence-informed pathways and additional physician training may decrease reliance on laboratory investigations and thereby further improve patient throughput. While improving flow in the PED is primarily a PED priority, flow is dependent on many factors that the PED cannot easily control, such as staff and actual or functional space limitations in both the PED itself and in inpatient services. , Our approach facilitates early clinical decision-making; this is particularly effective at decreasing LOS. Interventions such as those that can be implemented by the PED itself are particularly desirable. , Decreasing waiting times and LOS decreases the number of patients who leave without being seen and improves patient satisfaction. Parents generally accept this approach. We have found that comparing our approach to Southwest Airlines boarding is both apt and readily accepted. Our approach fits squarely within the overall strategy of "internal waiting rooms" and "awaiting results" areas used in general EDs. There are unique imperatives to PED playrooms, however. First, a playroom addresses much of the challenge of child-centered care in the ED. Second, it helps decrease parental anxieties as they see their child defervesce and resume normal behavior. In some settings, parents' notions of the suitability of their children's newly found playmates may occasionally arise, although in our experience this is rarely verbalized. This work builds on the underlying time and space limitations thesis of Michelson et al. We effectively created more treatment space and nursing resources by removing those children who need neither from the treatment room. There are limits to what our playroom model can achieve, as evidenced by a small offset in the benefit of rapid rooming on the time taken in the next phase of care. This may reflect a difference in settings. Michelson et al. describe an academic setting where doctors are plentiful; in the community setting there are far fewer medical providers delivering more care.
Second, in Michelson et al's study, space was a limiting factor 5% of the time. In our setting we anticipated a daily attendance of up to 45 patients, but in practice have seen 110 on busy days. This distinguishes our observed practice from Michelson et al's computer models. The underlying principles guiding their models and our intervention are the same. Other concerns include that increasing PED efficiency may increase census sufficiently that downstream resources (eg, lab, inpatient services) find their workload increased. Ideally, the playroom is a separate physical space with primary stewardship belonging to child life services (play therapy). However, the same benefits could be expected to be obtained by simply moving patients back to the waiting room without the investment in child-centeredness implied by the playroom model. Whether it would be as well accepted by parents depends on the setting. In our case the opposite occurred. Our census is now substantially higher than the 14,500 patients originally planned for when we designed a "no wait" PED. Consequently, some patients now do have to wait to be roomed, and the playroom space is often shared with some patients who have just arrived. Using the playroom requires PED staff to empower the parents to observe their children as they defervesce, or as part of a head injury observation period, secure in the knowledge that PED staff are immediately available should they be needed. Empowering parents in this way teaches them how to manage simple fevers at home and reassures them in the face of a common tendency to overestimate how sick one's own child is. It also reduces the overall cost of care by allowing staff to see other patients. When a strategy of parental observation in the playroom is used, such as for head injuries, staff need to recognize that interval development of symptoms may require rapidly returning a patient to a treatment room. This is to be expected and should not be interpreted as a failure of the approach. The concept of a playroom/internal waiting area is straightforward, but successful implementation required intensive prolonged effort by physician and nurse leaders with wholehearted support from hospital administrators. This process results in more patients being seen in a shift by the same number of staff. Consequently, these staff need to be supported. Although beyond the scope of the evidence presented here, we observed that additional physician training, with PED management protocols to relieve cognitive load, order sets or order preference lists that align with PED management protocols, and physician scribes are all hugely helpful when implementing this approach. Nursing staff need to be similarly given additional training and supported with respect to streamlining processes and documentation that do not add value to the patient. A key investment is a child life specialist (CLS). We initially relied on inpatient CLS staff, but as their value became clear we brought in two of our own CLSs as part of our PED staff. Future research could focus on refining playroom eligibility; measuring the effects of associated parallel-flow strategies, CLSs, and reinvented nursing processes; clarifying the role of discharge instructions; and identifying those processes that parents perceive as adding little value.
This was a single-center study in which the model was implemented at the inception of the PED, prior to the establishment of a culture that would allow unnecessary in-room waiting by patients. It is intuitive that the time taken to room a patient could be affected by the number of patients seen that day and that hour, as well as by the acuity and laboratory testing required for the prior patients. Statistical models risk oversimplifying this reality. We have addressed this by using interacted models which, although more complex than parsimonious ones, had better fit characteristics and more faithfully reflect the observed reality. These complex models require graphical description to be readily understood. Even these models are simplifications of reality. Proving causality is difficult; our approach of comparing time to rooming and ED LOS when the PED playroom can work (open treatment rooms) and cannot work (all treatment rooms occupied with non-playroom-eligible patients) yields an estimate which, despite adjustment for other considerations, will always be influenced by patient load and complexity. Nonetheless, given the constraints inherent in this type of research, our estimates have been made as tight as possible. Our results also occur in the context of parallel flow, where any team member can room a patient and use an electronic tracking board to communicate that fact. This parallel flow decreases the potential for the triage process to impede overall PED productivity. This effect is approximated in other PEDs by employing multiple nurses dedicated to initial triage. Parallel rooming and a playroom/internal waiting area represent different, independent processes, and the former does not alter our findings about the latter. Our work does not address other factors in PED operations and patient satisfaction such as quality of patient-staff interactions, perceptions of caring, and time spent with patients. , We were limited in the variables we could use. Triage category, although used for prioritizing patients, is relatively crude. We accept that some readers may regard our secondary outcome as more important than our primary outcome. We also did not perform a chart review to determine the appropriateness of the decision-making as to which children were moved to the playroom. As a group, playroom-eligible children were less sick, older, and had less laboratory testing than those who were not eligible.
Implementing a playroom in the PED for selected patients generally decreases time to rooming of the next patient and decreases LOS.
|
Evaluation of the Effect of Patient Education and Strengthening Exercise Therapy Using a Mobile Messaging App on Work Productivity in Japanese Patients With Chronic Low Back Pain: Open-Label, Randomized, Parallel-Group Trial | fa04e5a3-54a0-4dd6-86cd-49a8a6a94391 | 9152720 | Patient Education as Topic[mh] | Background Chronic low back pain (CLBP) is common in adults, with prevalence rates as high as >80% . In Japan, the low back is the most common site for pain in 31% of Japanese adults aged ≥20 years . Low back pain (LBP) is associated with high disability. In the Global Burden of Diseases, Injuries, and Risk Factors Study 2017, LBP ranked highest in terms of years lived with disability among the 354 conditions studied over the period of 28 years . Recurrence of pain, limitation of activity, loss of productivity, and work absenteeism contribute to the associated huge socioeconomic burden of CLBP [ - ]. In a retrospective, cross-sectional study using the 2014 Japan National Health and Wellness Survey data, 77.4% of 30,000 Japanese adults with CLBP reported presenteeism and had a poor quality of life (QoL) compared with those without presenteeism . A cross-sectional survey of 392 patients with CLBP in Japan estimated the costs for lost productivity as approximately ¥1.2 trillion (US $10 billion) per year . A recent internet-based survey of 10,000 Japanese workers reported that 36.8% of the participants had a health problem that interfered with their work during the past 4 weeks. Among the symptoms that most affect presentism, neck pain or shoulder stiffness, LBP, and mental illnesses accounted for approximately 35.7%. The annualized costs of presenteeism per capita for these conditions were US $414.05, US $407.59, and US $469.67, respectively . Several studies have reported that exercise alleviates CLBP and disability [ - ]. Furthermore, exercise regimens have been reported to reduce disability and improve the QoL of individuals with CLBP . Patients with chronic pain, including CLBP, exhibit various symptoms and signs as the duration of the pain increases. When the pain lingers, it becomes intractable and serious through a cyclical interaction with psychosocial factors. As illustrated by the fear-avoidance model of pain, pain often involves catastrophizing when it becomes intractable . There are also several psychological treatments or therapies for musculoskeletal symptoms . In a study on patients with CLBP, both groups—one that received only exercise therapy and the other that received a combination of cognitive behavioral therapy and exercise therapy—showed improvements in pain intensity and QoL compared with baseline . Despite these encouraging results, patients often show noncompliance with exercise therapy. Perceptions of the underlying illness and exercise therapy, lack of positive feedback, and degree of helplessness are factors related to noncompliance with exercise therapy . In recent years, digital devices have become popular for supporting exercise therapy for musculoskeletal pain [ - ]. These digital devices have been reported to improve adherence . Most studies have supported the role of digital interventions for LBP alleviation [ - ]. The mobile messaging app Secaide (Travoss Co, Ltd) is a digital device designed to enhance the patient’s understanding of CLBP and enable remote exercise therapy for more accessible and personalized home-based pain management. The app was nicknamed se · ca · ide by the self-care guide service . 
Secaide also means "in the world" when read in Japanese. The usefulness of mobile messaging app–based interventions in managing neck and/or shoulder stiffness and LBP in workers has been established in randomized controlled trials . Objectives Previous studies have not clarified the impact of CLBP treatment interventions on presenteeism. We hypothesized that a therapeutic intervention for CLBP would have a positive effect on presenteeism. This study aims to explore the effects of patient education and strengthening exercise therapy on work productivity, symptoms, and QoL in patients with CLBP who were receiving medication and who continued to experience pain despite treatment. As a new approach, we used web-based videos for patient education and a mobile messaging app to support the continuation of exercise therapy. Because of the COVID-19 pandemic, we devised methods to continue the study without any clinic visits, delivering the intervention as web-based remote exercise therapy and using patient-reported outcomes (PROs) as the outcome evaluation method.
Study Design This was a multicenter, open-label, randomized, parallel-group study conducted in Japan from June 2020 to March 2021 at 16 clinics ( ). The main clinical specialty of the 16 community-based clinics included 8 (50%) orthopedic facilities, 3 (19%) pain clinics, and 5 (31%) primary care facilities. In this study, patients were followed up for 12 weeks ( ). Patients who met the eligibility criteria were randomly assigned using a stochastic minimization procedure with allocation regulators, such as age (<45 or ≥45 years), sex (male or female), and willingness to enhance exercise therapy (yes or no). Ethics Approval The study was conducted in accordance with all the international and local laws, the principles of the Declaration of Helsinki , and the SPIRIT (Standard Protocol Items: Recommendations for Interventional Trials) statement . Written informed consent was obtained from all patients before enrollment in the study. The study protocol and all subsequent amendments were approved by the institutional review board of Takahashi Clinic (clinical research implementation plan MA2020-P-002). The study was registered with the University Hospital Medical Information Network Clinical Trials Registry (UMIN000041037). Study Population Patients who met the following criteria were included in the study: (1) having LBP for >3 months, (2) aged 20 to 64 years, (3) receiving prescribed pharmacological treatment for the pain, (4) not likely to experience any unexpected pain flare-ups for 12 weeks, (5) able to walk independently, (6) engaging in work for >3 days per week in either full-time or part-time capacity for >3 hours a day, and (7) having the skill and understanding to operate mobile communications. The CLBP diagnosis was established by qualified practicing physicians. The key exclusion criteria were as follows: (1) aged >65 years, (2) having CLBP unrelated to a musculoskeletal condition, (3) with radiculopathy or constructive spinal deformity, (4) having LBP with red flags (with chest pain, malignant tumor, HIV infection, malnutrition, significant weight loss of ≥5% within 1 month, extensive neurological symptoms, or fever of ≥37.5 °C), (5) using over-the-counter medications for CLBP, (6) pregnant women and those who were willing to be pregnant during the clinical trial period, (7) receiving steroids (intravenous injection or oral administration) or opioids, and (8) unable to understand the Japanese language. Study Treatment, Education, and Therapy The patients received the prescribed pharmacological treatment, surgical treatment, and/or patient education and exercise therapy for the management of CLBP. Pharmacological Treatment Information about the use of medications for pain was obtained from an electronic medical record system (Mebix, Inc). Pharmacological treatment included nonsteroidal anti-inflammatory drugs, acetaminophen, weak opioids, blood flow improvers, muscle relaxants, medications for osteoporosis, antidepressant drugs, steroids, antiepileptic drugs, and nerve-blocking agents, such as local anesthetic drugs. Medications were assessed at randomization; weeks 4, 8, and 12; and study discontinuation. Surgical Treatment Any surgeries for pain relief were recorded at randomization; weeks 4, 8, and 12; and study discontinuation. Patient Education and Exercise Therapy A web-based video program was used to provide evidence-based thinking regarding the importance of a cognitive behavioral approach for patients with CLBP. 
The exercise therapy was developed by Travoss Co, Ltd, in accordance with the recommendations for alignment, core muscles, and endogenous activation, including improvement of posture and mobility for proper alignment , stimulation and/or strengthening of deep muscles for spinal stability , and modulation of intrinsic pain through the activation of endogenous substances by aerobic exercise . Secaide, a mobile messaging app for mobile communication devices such as smartphones and tablets, downloadable via a QR code, serves as an aid to exercise therapy. In Japan, such mobile messaging apps are widely used for SMS text messaging and voice calls . Patient education and exercise therapy announcements were delivered as follows. An artificial intelligence–assisted chatbot was programmed to send users messages with exercise instructions and some tips on what they can do in their daily lives to improve their symptoms. The messages were sent every day at a fixed time through the LINE app (a smartphone app widely used in Japan for sending and receiving SMS text messages, images, and videos, and for making voice calls; LINE Corporation). Users could change the notification time to one convenient for them, and the exercise could be performed at a time of the patient’s choosing. Participants could complete their exercise within approximately 1 to 3 minutes each day ( - ). During the first week, Secaide provided patient education on evidence-based thinking about the importance of a cognitive behavioral approach for CLBP. Secaide also provided guidance for carrying out six simple exercise menus over 60 days. After the 14th day, information on two additional types of exercise was optionally provided to patients who desired further exercise. At each clinic, the conventional group received only routine medical care. In the exercise therapy group, patient education and strengthening exercise therapy were provided in addition to routine medical care. To avoid cross-contamination between the 2 groups, only the exercise group received patient education and daily exercise therapy via Secaide ( - ). Survey All patients were required to respond to a web-based survey that captured demographic and background information, including occupation and exercise habits. Furthermore, pharmacological and surgical treatment for CLBP and the number of institutional visits in the last 30 days were collected at weeks 0 to 4, weeks 4 to 8, and weeks 8 to 12 and at study discontinuation. Adherence to the mobile messaging app–based exercise therapy was measured by the rate of implementation (%), calculated as follows: (access days/observation period)×100. Category aggregation for the adherence rate used the bands 0% to 25%, 25% to 50%, 50% to 75%, and ≥75%. Assessments were made from the log information (date) of Secaide and the PRO response date, that is, weeks 0 to 4, weeks 4 to 8, weeks 8 to 12, and weeks 0 to 12. Study End Points Primary End Point The primary end point was the change in work productivity at week 12. Work productivity was measured using the Quantity and Quality method (QQ method), which evaluates work productivity in terms of quality, quantity, and efficiency and is an evaluation index for absenteeism .
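To make the adherence definition above concrete, the following is a minimal sketch (in Python) of how the implementation rate and its category bands could be computed from the app access log; the function names, the example dates, and the handling of band boundaries are illustrative assumptions rather than the study's actual software.

from datetime import date

def implementation_rate(access_days, period_start, period_end):
    """Implementation rate (%) = (access days / observation period) x 100."""
    observation_days = (period_end - period_start).days + 1  # inclusive observation period
    accessed = sum(1 for d in access_days if period_start <= d <= period_end)
    return 100.0 * accessed / observation_days

def adherence_band(rate):
    """Category aggregation into the bands 0%-25%, 25%-50%, 50%-75%, and >=75%."""
    if rate >= 75:
        return ">=75%"
    if rate >= 50:
        return "50% to <75%"
    if rate >= 25:
        return "25% to <50%"
    return "0% to <25%"

# Example: 21 logged access days in a 28-day window (weeks 0 to 4).
log = {date(2020, 7, d) for d in range(1, 22)}
rate = implementation_rate(log, date(2020, 7, 1), date(2020, 7, 28))
print(round(rate, 1), adherence_band(rate))  # 75.0 >=75%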
Secondary End Points The secondary end points were changes in work productivity measured using the Work Productivity and Activity Impairment Questionnaire: General Health (WPAI-GH) , CLBP and shoulder stiffness (Numerical Rating Scale [NRS]) , subjective ratings of stiffness and LBP on a scale of 1 to 5 , disease-specific QoL (Roland-Morris Disability Questionnaire [RDQ-24]) , health-related QoL (EuroQoL 5 Dimensions 5 Level [EQ-5D-5L]) , fear of movement (Tampa Scale for Kinesiophobia [TSK-11]) , degree of depression (Kessler Screening Scale for Psychological Distress [K-6]) , drug use, and consultation status at medical institutions. All the secondary end points were measured at baseline and week 12. In addition, changes in LBP and drug use were measured at weeks 4 and 8 during the study period. Statistical Analysis The data related to changes in WPAI-GH in a 6-week randomized study of patients with LBP were used to calculate the sample size of 100 participants . The required sample size in this study was estimated to be 90 patients for 80% power at an intergroup difference of 2.7, a common SD of the 2 groups of 4.5, and an α level of .05, using the 2-sample, 2-tailed t test. Considering a dropout rate of 10%, the total sample size was 100 (n=50, 50% patients in each group). For allocation, a minimization method was used, with adjustments for age, sex, and willingness to adopt the exercise therapy. Data were summarized using descriptive statistics of the mean (SE) for continuous variables and frequencies and percentages for categorical variables. To compare continuous data in the 2 groups, an analysis of covariance model (covariates: treatment, baseline, age, sex, and willingness to adopt the exercise therapy) or mixed-effects model for repeated measures (covariates: treatment, baseline, time, time×treatment, age, sex, and willingness to adopt the exercise therapy) was used for the primary and secondary end points, depending on the times of measurements. The Fisher exact test was used to compare the percentages in the 2 groups. In patients who had data reported at week 12, post hoc analyses were performed to check the impact of the treatment compliance (<75% and ≥75% exercise groups and conventional group) on the primary end point (work productivity) and secondary end points (NRS of CLBP and RDQ-24). Data were analyzed using SAS (version 9.4; SAS Institute Inc).
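As an illustrative cross-check of the sample size reasoning above (between-group difference of 2.7, common SD of 4.5, two-sided α of .05, 80% power, and roughly 10% dropout), the calculation can be approximated with a normal-approximation formula as follows; this sketch is not the computation actually used in the study.

import math
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Approximate per-group size for a 2-sample, 2-tailed comparison (normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # about 1.96
    z_beta = NormalDist().inv_cdf(power)           # about 0.84
    return math.ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)

per_group = n_per_group(delta=2.7, sd=4.5)   # 44; the exact t test calculation gives ~45 per group (90 in total)
with_dropout = math.ceil(per_group * 1.10)   # ~49 per group after allowing ~10% dropout, close to the 50 used
print(per_group, with_dropout)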
Study Population A total of 101 patients with CLBP were recruited, and consenting participants were randomly allocated to either the exercise group (n=50, 49.5% randomized; n=48, 47.5% analyzed for efficacy), who used the web-based videos and Secaide for exercise therapy, or the conventional group (n=51, 50.5% randomized and analyzed; ). Both groups continued with the prescribed pharmacological treatments. The baseline characteristics of patients in the exercise and conventional groups are shown in . Most baseline characteristics did not differ between the 2 groups; however, variability in work productivity (WPAI-GH) was observed. In addition, >85% of the patients in both groups requested exercise therapy (exercise group: 42/48, 88%; conventional group: 45/51, 88%), indicating a study population that was highly conscious of exercise. Of the 48 participants in the exercise group, 37 (77%) were adherent to the use of mobile messaging app–based exercise therapy in weeks 0 to 4, 31 (65%) in weeks 4 to 8, and 32 (67%) in weeks 8 to 12 ( ). Primary End Point At week 12, the mean change (SE) in work productivity (QQ method) in the exercise group (n=37) and the conventional group (n=32) was 0.062 (0.069) and 0.114 (0.069), respectively (difference between groups −0.053, 95% CI −0.184 to 0.079; P =.43). No significant difference was observed for the primary end point. Secondary End Points Work Productivity Changes in the WPAI-GH parameters in the 2 groups at week 12 are shown in . The change in percent overall work impairment due to health in the exercise group (n=36) and the conventional group (n=26) was −13.3 (SE 6.8) and −4.7 (SE 7.6), respectively (difference between groups −8.6, 95% CI −23.6 to 6.5; P =.26). Low Back Pain At week 12, although no statistically significant difference in the reduction of the NRS scores was observed between the exercise (mean −1.1, SE 0.3) and conventional groups (mean −0.7, SE 0.4; P =.26), the mean subjective rating of improvement in CLBP symptoms was significantly better in the exercise group (mean 3.2, SE 0.2) than in the conventional group (mean 3.8, SE 0.3; difference between groups −0.5, 95% CI −1.1 to 0.0; P =.04). Quality of Life At week 12, no statistically significant differences in the RDQ-24 scores were observed between the exercise and conventional groups. A significant improvement in EQ-5D-5L at week 12 was observed in the exercise group compared with the conventional group ( ). Kinesiophobia At week 12, a significant improvement in the TSK-11 score was observed in the exercise group (mean −2.3, SE 1.2) compared with the conventional group (mean 0.5, SE 1.3; difference between groups −2.8, 95% CI −5.5 to −0.1; P =.04). Depression At week 12, no significant improvement in the K-6 score was observed in the exercise group (mean −1.5, SE 0.8) compared with the conventional group (mean −0.6, SE 0.9; difference between groups −0.9; 95% CI −2.7 to 0.9; P =.34). Change in Consultation Status Visits to clinics were significantly reduced in the exercise group at weeks 4, 8, and 12. Similarly, a significant reduction in visits to acupuncture and moxibustion clinics was observed in the exercise group at weeks 4 and 8 ( ). Surgical Treatment and Change in Drug Use No differences in surgical treatment or changes in drug use were observed in either the conventional or the exercise group throughout the study period.
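The WPAI-GH results reported above (eg, percent overall work impairment due to health) follow the questionnaire's published scoring rules, which combine absenteeism and presenteeism into a single percentage. The sketch below illustrates that derivation; the item values used in the example are hypothetical, and the code is not part of the study's analysis.

def wpai_gh_scores(hours_missed_health, hours_worked, impairment_while_working, activity_impairment):
    """Standard WPAI:GH outcomes, each expressed as a percentage (0-100)."""
    total = hours_missed_health + hours_worked
    absenteeism = hours_missed_health / total if total > 0 else 0.0
    presenteeism = impairment_while_working / 10.0   # 0-10 rating -> proportion
    overall = absenteeism + (1 - absenteeism) * presenteeism
    return {
        "percent_work_time_missed": 100 * absenteeism,
        "percent_impairment_while_working": 100 * presenteeism,
        "percent_overall_work_impairment": 100 * overall,
        "percent_activity_impairment": 100 * (activity_impairment / 10.0),
    }

# Example: 4 hours missed for health reasons, 36 hours worked, ratings of 3 and 4 out of 10.
print(wpai_gh_scores(4, 36, 3, 4))  # overall work impairment = 37%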
Post Hoc Analysis In this study, no significant between-group difference in work productivity (QQ method), pain intensity, or RDQ-24 was observed for the exercise group. As a post hoc analysis, the effects of exercise therapy on work productivity (QQ method), pain intensity, and RDQ-24 were therefore compared between patients with a high exercise compliance rate (≥75%) and the other groups (<75% compliance and the conventional group). At week 12, patients who showed higher (≥75%) adherence to the exercise regimen had a greater improvement in work productivity (QQ method), NRS scores, and RDQ-24 than those with <75% adherence or the conventional group ( ).
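The adjusted between-group differences and CIs reported in this section come from the covariate-adjusted models described under Statistical Analysis. The following is a minimal sketch of how such an ANCOVA could be specified; the synthetic data frame and column names are assumptions for illustration only, and the actual analysis was performed in SAS 9.4.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 100
# Synthetic analysis data set: one row per patient with the week-12 change score.
df = pd.DataFrame({
    "treatment": rng.choice(["exercise", "conventional"], n),
    "baseline": rng.normal(0.7, 0.2, n),            # baseline work productivity (QQ method)
    "age": rng.integers(20, 65, n),
    "sex": rng.choice(["male", "female"], n),
    "willing": rng.choice(["yes", "no"], n, p=[0.9, 0.1]),
})
df["change_qq"] = rng.normal(0.1, 0.3, n)           # change in work productivity at week 12

# ANCOVA: treatment effect on the change score, adjusted for baseline and the allocation factors.
model = smf.ols("change_qq ~ C(treatment) + baseline + age + C(sex) + C(willing)", data=df).fit()
print(model.params["C(treatment)[T.exercise]"])     # adjusted between-group difference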
Principal Findings Exercise intervention is considered an integral part of CLBP management and has been reported to reduce pain and improve function in patients with CLBP; however, identifying effective exercise types and maintaining exercise over time remain challenging . In recent years, various digital interventions have attempted to address these challenges [ - ]. In this study, web-based video patient education and strengthening exercise therapy using the mobile messaging app did not produce any significant changes in work productivity or loss of workdays due to CLBP at week 12 compared with conventional pharmacological treatment. To the best of our knowledge, no randomized controlled trial has used improvement in work productivity as the intervention outcome in patients with CLBP; therefore, this result cannot be compared with previous studies. It is possible that drastic changes in the working environment during the COVID-19 pandemic affected the assessment of work productivity. During the research period, the Government of Japan began to recommend remote work as a national policy. In the evaluation of work productivity, the quantity and quality of work at the time of evaluation were compared with those in the absence of CLBP. The effect of changes in working style on work productivity might have been greater than that of exercise therapy. A survey of workers in remote work before and during the COVID-19 pandemic, conducted in Japan in 2020, also reported that full remote work of 5 days a week reduced work productivity . Therefore, a difference in work productivity between the 2 groups attributable to exercise therapy may not have been observable. In fact, many secondary end points showed a significant improvement with exercise therapy, whereas work productivity did not. The work productivity assessments may have been particularly susceptible to the effects of the COVID-19 pandemic compared with outcomes such as pain intensity and QoL. To assess the impact of exercise therapy on work productivity in patients with CLBP, better-designed clinical studies are needed. The use of mobile devices can enhance patient engagement in the self-management of CLBP and improve exercise compliance . In this study, >50% (36/47) of the participants had ≥75% compliance with the mobile messaging app–based exercise therapy. Similar adherence rates of about 50% to 70% for home-based exercise programs have been reported in previous studies . The results of this study thus also showed high adherence to the continuation of exercise therapy using mobile devices. A problem with exercise therapy is the low level of adherence to the prescribed exercises; two systematic reviews have reported that up to 70% of participants did not adhere to the prescribed exercises . It has been suggested that digital devices may improve patient compliance with exercise therapy, which is considered to have the highest level of evidence for CLBP. In this study, many end points other than the primary end point showed results similar to those of previous studies. In particular, the subjective pain score improved significantly in workers who received exercise therapy, which is consistent with a previous study using Secaide . The QoL end point (EQ-5D-5L) showed a significant improvement, as in previous studies using digital interventions . Kinesiophobia is a therapeutic target of exercise regimens in the management of CLBP [ - ].
To the best of our knowledge, no study has evaluated the impact of mobile-based apps on pain-related fear in patients with CLBP. In this study, we evaluated kinesiophobia using the TSK-11 scale, which has been validated for use in patients with CLBP . At week 12, a significant improvement in the TSK-11 score was observed in the exercise group. Taken together, these results support the effect of exercise therapy in this study, as in previous studies. In addition, a post hoc analysis was used to evaluate the relationship between adherence to exercise therapy and outcomes. High adherence was associated with good outcomes in work productivity (QQ method), CLBP score (NRS), and RDQ-24 score. Recently, evaluation using PROs has attracted attention in clinical trials . The concept of the minimal clinically important difference (MCID) is established, and its importance is recognized. The MCID is not a statistically significant difference but an indicator of the clinical benefit to patients. The MCID has been reported as an NRS ≥2 for LBP and a 30% change in score for RDQ-24 (if the score is <7) . In the post hoc analysis, patients with high adherence to exercise therapy showed an improvement of 2.28 points in the NRS for CLBP from baseline and an improvement of approximately 38% in RDQ-24. These changes reached the MCID and were therefore clinically meaningful. Previous studies have reported that apps improve adherence to exercise therapy; therefore, Secaide, as used in this study, may also play an important role in achieving better outcomes. In this study, we adopted the Secaide app , an interactive health promotion system, to aid education and exercise therapy in patients with CLBP. Furthermore, adopting web-based education and mobile messaging app–based exercise therapy may reduce the number of facility visits, ensure safety, and ensure continued patient care. Pain treatment based on traditional clinic visits may be difficult during the COVID-19 pandemic. PROs are becoming increasingly important, and the need for remote medical care, such as digital health programs, is increasing. The use of technology can be advantageous, enabling the remote collection of data during such unprecedented times. Enhancing exercise therapy with digital devices yielded better results for more end points than routine clinical practice alone. These results and compliance rates were obtained under research conditions; although their impact on routine treatment cannot be evaluated precisely, we hope they will provide an opportunity to consider the usefulness of remote medical care in CLBP. Limitations This study had certain limitations. Changes in work quality and quantity were used as outcomes for work productivity. This study was conducted during the COVID-19 pandemic, when the social working environment was evolving with the adoption of remote working. These changes in the work environment may have influenced the evaluation of work productivity. The study design has the inherent limitations of a short duration (12 weeks) and a small sample size (50 in each group). There have been no previous studies with the same patient population and end point, and the required number of cases was calculated using data for a secondary end point of this study. As a result, the statistical power of this study may be lower than expected.
We did not assess the rate of adherence to prescribed medications, which could have affected the work productivity outcomes of exercise therapy using the mobile messaging app. The data for the study outcomes were self-reported, and response bias could have led to varying estimates of the severity of CLBP. Comparisons of the high-adherence group with the other groups should be interpreted with caution because they are based on a post hoc analysis. Conclusions Web-based patient education and strengthening exercise therapy using the Secaide app may be useful for enhancing the effectiveness of exercise therapy in the treatment of CLBP. In this exploratory study, the exercise group showed consistently better trends for most end points than the conventional group. Adherence to exercise therapy was associated with improved work productivity, NRS for CLBP, and RDQ-24 scores, suggesting that the mobile messaging app is useful for CLBP treatment. This study did not demonstrate an effect of the therapeutic intervention for CLBP on work productivity; further research is required to assess work productivity with therapeutic interventions.
|
A Low Cost and Eco-Sustainable Device to Determine the End of the Disinfection Process in SODIS | e1d6c6be-320c-41ea-b768-86ca33381118 | 9865546 | Microbiology[mh] | Lack of access to safe drinking water remains one of the main health problems in many regions of the world. According to recent reports of UNICEF and the World Health Organization (WHO) , in 2020 around one in four people lacked safely managed drinking water in their homes, which implies that about two billion people worldwide consume contaminated water. As shown on the map in , there are regions in Africa and Southeast Asia where less than 20% of the population has access to safe drinking water. Consequently, about 40,000 deaths, mostly of infants and children, are reported monthly due to diseases derived from the microbiological contamination of water, such as diarrhea, cholera, and hepatitis. In addition, the lack of safe drinking water may arise as a temporary problem in other parts of the world, caused by emergency situations such as natural disasters or armed conflicts. In fact, according to the UNICEF report , children under the age of fifteen living in countries affected by protracted conflict are on average almost three times more likely to die from diarrhoeal diseases caused by a lack of safe water than by direct violence. Because this is a problem associated with low-income countries or emergency situations, a low-cost, simple, and sustainable solution is needed. In this sense, solar water disinfection (SODIS) is an inexpensive Household Water Treatment (HWT) approved by the WHO that does not require the use of chemical disinfectants, which underlines its sustainability. This solution is based on the ability of UV radiation from sunlight to inactivate pathogens that are present in water, such as bacteria, viruses, and protozoa, and its effectiveness has been undoubtedly demonstrated over several decades . Moreover, it presents an optimal solution for low-income countries, as these regions usually have high solar irradiance levels throughout the year. In the SODIS method, microbiologically contaminated water is introduced into transparent plastic containers, such as polyethylene terephthalate (PET) bottles or polyethylene (PE) bags , as we explain in detail in , and then exposed to direct sunlight until enough UV radiation is received to inactivate the pathogens. This amount of radiation is known as the "lethal UV dose" (expressed in W·h/m²), and it depends on the properties of the water, the level and type of the microbiological contamination, and the characteristics of the SODIS containers (transmittance to UV light, size, and shape) . There are several studies in the literature [ , , , ] that have established methods for estimating the lethal UV dose for different types of SODIS containers and pathogens. However, one of the main drawbacks of the SODIS method is that there are no practical solutions to determine when the lethal UV dose has been reached, owing to the strict constraints imposed by the SODIS method. On the one hand, because the purpose of the solution is to complement the low cost of SODIS containers, its price must not add significantly to the total cost. On the other hand, because a high commitment to the environment is needed in this type of project, an eco-sustainable solution should be employed.
As we show in , several approaches have been proposed in the literature to determine the end of the disinfection process; however, these do not strictly satisfy both constraints, and as such cannot practically be applied in real situations. In fact, the currently used solution is not to measure the UV radiation received; rather, it is to wait a reasonable time to inactivate the pathogens, that is, to expose the SODIS containers for at least 6 h under full sunshine or 48 h under cloudy conditions . However, this solution makes inefficient use of the valuable resources provided by SODIS, as an overly extended exposure time is used to ensure that the water has been disinfected, when in fact the lethal UV dose may have already been reached well before. In this work, we propose a low cost and eco-sustainable electronic device to be used in conjunction with SODIS containers to determine the end of the disinfection process on the basis of the corresponding lethal UV dose. The proposed solution strictly satisfies the constraints imposed by the SODIS method. We designed a low cost device (around EUR 12) which has enough autonomy to work for months with small, low cost disposable batteries, thereby avoiding the use of materials that are considered hazardous waste at the end of their useful life, such as rechargeable batteries. In addition, the physical design of this device is valid for any region and type of SODIS container, as the lethal UV dose can be programmatically changed in each case according to the relevant literature. The rest of this paper is organized as follows. describes related works in the literature, while in a general overview of the methodology that has been applied for the design and implementation of the proposed device is presented. After that, the different steps of this methodology are described in detail in the following sections; specifically, an analysis of different low cost UV sensors is summarized in in order to select the most accurate one, while describes the design of the final device and its testing in real conditions. Finally, summarizes the main conclusions of this paper.
As mentioned above, in the SODIS method transparent plastic containers are filled with microbiologically contaminated water and exposed to direct sunlight to disinfect it. This requires that two questions be answered, namely, the material used for the containers and how to determine the end of the disinfection process. With respect to the first question, we can find in the literature a great variety of works that propose and analyze the effectiveness of different types of materials. Traditionally, transparent polyethylene terephthalate (PET) bottles have been widely used due to their efficient transmission of UV-A radiation (about 85–90 percent) . However, this type of material does not transmit UV-B radiation , which causes the most severe genomic damage to viruses and bacterial pathogens through direct photo-inactivation mechanisms . In addition, there are concerns about the migration of chemical contaminants from PET bottles into water, although there is no direct scientific evidence of this . Thus, more efficient and safer materials have been studied and successfully evaluated in the literature, such as polystyrene (PS) , polycarbonate (PC) , polymethylmethacrylate (PMMA) , and polyethylene (PE) . In particular, the use of PE bags has been viewed as a promising solution, as this material has good transmission of UV-B radiation. Moreover, the bags can be easily delivered and distributed when empty because they are softer and more flexible than PET bottles. Regarding the second question, as mentioned in , the end of the disinfection process is reached when enough UV radiation is received to inactivate the pathogens that are present in the water, which is defined as the lethal UV dose. In this sense, several solutions have been proposed in the literature to measure the accumulated UV irradiation. In , an electronic control system based on a UV-A photodiode was included as part of a SODIS batch photo-reactor, consisting of a glass tube positioned at the focus of a compound parabolic collector (CPC) mirror and two reservoir tanks (one for the untreated water and the other for the treated water). In this framework, the control system is used to control the water flow between the tanks by means of electronic valves. However, the proposed dosimeter has two main drawbacks. On the one hand, it is integrated into the batch photo-reactor, which is a more expensive solution than the use of classic plastic bottles and bags; thus, it does not meet the low cost requirement. On the other hand, it can only measure the amount of UV-A radiation, not the UV-B radiation. Similar approaches can be found in , where, although UV-B radiation has been taken into account, the proposed solutions are integrated in a CPC reactor system. In , a chemical solution was proposed based on compounds that change color after receiving the appropriate UV dose. Although it is a very inexpensive approach, the main problem with this solution is its sustainability. Because these dosimeters are disposable one-use systems, they need to be replaced with each SODIS application, which implies supply chain issues. Finally, there are electronic devices which work in a similar way to our proposed approach. One of the most popular is marketed under the name "WADI" . It consists of a UV meter powered by a solar panel, which can be placed alongside the SODIS containers to determine when solar water disinfection has been reached.
The main drawback of this device is its price (around USD 30), which does not meet the required low cost constraint, preventing its use in low-income countries. The presumed advantage of this device compared to ours is its lifetime; in theory it can be used for several years, as it does not require battery replacement. However, this is not true in practice, because a recalibration of the UV sensors is needed every 12–24 months in order to maintain their accuracy . Thus, the useful lifetime of such a device should not exceed this period. Another electronic device, known as "SCIPIO" (Scientific Purification Indicator), was proposed in to measure the accumulated UV irradiation in SODIS containers. In this case, the device is designed to be introduced into the SODIS container during the disinfection process. As in the previous approach, the main drawback of this solution is its price. Although the authors do not provide the manufacturing cost, there is no doubt that this device cannot meet the required low cost constraint, as it integrates a UV sensor, daylight sensor, temperature sensor, gyroscope, capacitance change sensor, memory LCD, Bluetooth, and solar power supply.
In this paper, we propose an electronic device to determine the end of the disinfection process in SODIS bottles and bags that is able to solve the problems with the previous approaches discussed above. The operation of the device is very simple: when SODIS containers are filled with contaminated water and exposed to sunlight, the device should be placed next to these containers in order to measure the accumulated UV irradiation, providing notice when the lethal UV dose has been reached. Because this dose may differ between regions and types of SODIS containers, the proposed device must be programmable. In this way, the same electronic device can be mass-produced, reducing the manufacturing cost, and the corresponding lethal UV dose can be set through software depending on the region and the characteristics of the employed SODIS containers. According to these requirements, we propose a microcontroller-based device which includes a UV radiation sensor to capture the solar irradiance corresponding to the UV-A and UV-B spectrum ranges (365 ± 10 nm and 330 ± 10 nm, respectively). The microcontroller is programmed to calculate the accumulated UV irradiance (the accumulation is initialized by a pushbutton), which is then compared with the lethal UV dose. When this value is reached, the device indicates by means of an LED that the disinfection process has concluded. A power supply stage with sufficient autonomy to work for months is included; it uses batteries that are not considered hazardous waste at the end of their useful life, in accordance with the commitment to environmental responsibility required for this project. In addition, a low battery indicator light is included in order to prevent the use of the device in such a state, as the notification of the end of disinfection may no longer be reliable. The most challenging constraint of this project is the need for a low cost solution, which imposes severe restrictions on the design of the device and the selection of the electronic components. In addition, low energy consumption is another important limitation that must be taken into account. The methodology applied for the design and implementation of the proposed device is summarized as follows. First, we analyzed the performance of several representative low cost UV radiation sensors in order to select the most accurate one. To do this, we developed a test prototype based on the Arduino platform to compare the response of these sensors with a reference pattern, as described in . After selecting the most suitable UV sensor, the next step was to design the final microcontroller-based device while satisfying the low cost and sustainability constraints, as shown in . Finally, the device was manufactured and tested in real conditions to analyze its accuracy by comparing it with a reference pattern.
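To make the intended firmware behaviour easier to follow, the short Arduino-style C++ sketch below illustrates the core dose-accumulation logic described above: the UV sensor is sampled periodically, the measured irradiance is integrated into an accumulated dose, and an LED is switched on once the programmed lethal dose is reached. This is only an illustrative sketch, not the actual firmware of the device; the pin assignments, ADC reference, calibration gain, and sampling period used here are assumptions chosen for the example.

```cpp
// Illustrative dose-accumulation sketch (Arduino core); not the device's real firmware.
// All pin numbers and conversion constants below are placeholder assumptions.

const uint8_t UV_SENSOR_PIN = A1;                 // analog output of the UV sensor board (assumed)
const uint8_t DONE_LED_PIN  = 0;                  // "disinfection finished" LED (assumed)

const float LETHAL_DOSE_WH_M2   = 45.0;           // programmable per region/container type
const float VOLTS_PER_COUNT     = 1.8 / 1023.0;   // 10-bit ADC with a 1.8 V supply (assumed)
const float IRRADIANCE_PER_VOLT = 100.6;          // placeholder gain, to be replaced by calibration
const unsigned long SAMPLE_PERIOD_MS = 10000;     // one sample every 10 s

float accumulatedDose = 0.0;                      // accumulated UV dose in W·h/m^2

void setup() {
  pinMode(DONE_LED_PIN, OUTPUT);
}

void loop() {
  // Convert the raw ADC reading into an irradiance estimate (W/m^2).
  float volts      = analogRead(UV_SENSOR_PIN) * VOLTS_PER_COUNT;
  float irradiance = volts * IRRADIANCE_PER_VOLT;

  // Integrate irradiance over the sampling period to update the dose (W·h/m^2).
  accumulatedDose += irradiance * (SAMPLE_PERIOD_MS / 3600000.0);

  // Signal the end of the disinfection process once the lethal dose is reached;
  // the real device would additionally enter an ultra-low-power sleep mode here.
  if (accumulatedDose >= LETHAL_DOSE_WH_M2) {
    digitalWrite(DONE_LED_PIN, HIGH);
  }

  delay(SAMPLE_PERIOD_MS);
}
```

In the real device, the placeholder gain would be replaced by the calibration obtained in the next section, and the microcontroller would also manage the pushbutton, the low-battery indicator, and its sleep modes.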
4.1. Tested Sensors
For this analysis, we selected three low cost sensors able to measure both UV-A and UV-B radiation:
- GUVA-S12SD UV-A : this sensor, which is based on Schottky technology, works in the spectral range from 240 nm to 370 nm, with maximum responsiveness at 350 nm. In particular, we chose the "Analog UV Light Sensor Breakout—GUVA-S12SD" board from Adafruit , which integrates this sensor as well as a preamplifier stage that sets the operating voltage level. This board is shown on the left in .
- LAPIS ML8511 : this sensor works in the spectral range from 280 nm to 390 nm, with maximum responsiveness at 365 nm. In this case, the sensor integrates its own preamplifier stage, which requires an operating voltage of at least 2.7 V. The "SEN0175" board from DFRobot , shown in the center of , was chosen.
- Vishay's VEML6075 : this sensor works in the spectral range from 315 nm to 375 nm, and can separately detect UV-A and UV-B radiation, with maximum responsiveness at 330 nm and 365 nm, respectively. It is based on CMOS technology, integrating a photodiode, amplifier, and analog/digital converter on a single chip. Reading is performed using the I2C protocol, and it operates within a power supply range of 1.7 V to 3.6 V. In this case, we chose the "PIM460" board from Pimoroni , shown on the right in .
4.2. Sensor Data Collection
In order to obtain the UV irradiation values from the selected sensors, we developed a testing device based on the Arduino UNO platform , which includes a data-logger module with a mini-SD card to save these data as files . In addition, an external Real-Time Clock module is employed to register the precise time at which each value is captured, as these data must be compared with a reference pattern and both should be synchronized. shows the schematic of the developed testing device. This testing device was programmed to register data samples captured by the sensors every 10 s, which is the sampling frequency used in the pattern device. Within this interval, three different measures are saved for each sensor: the minimum value, the maximum value, and the mean value. Before saving them, the values provided by the sensors must be transformed into irradiance values (in W/m²) using the manufacturers' data sheets and recommendations. It should be noted that these values are considered raw data until the final calibration of the sensors is carried out. In the case of the Adafruit GUVA-S12SD, the voltage Vo provided by the board is read through an Arduino analog pin and converted into an irradiance value using the following relationship :
(1) Irradiance (W/m²) ≅ 0.1006 · Vo
In a similar way, the irradiance value can be calculated from the voltage Vo provided by the DFRobot ML8511 board as follows :
(2) Irradiance (W/m²) = 0.384 · Vo − 77.749
Finally, the Pimoroni VEML6075 module provides a digital output through I2C communication. In this case, digital values related to UV-A and UV-B irradiance are obtained using the Arduino library provided by the manufacturer. These digital values can be employed to obtain the UV irradiance as follows :
(3) Irradiance (W/m²) = uva · 5.376 · 10⁻³ + uvb · 2.381 · 10⁻³
4.3. Sensor Selection and Calibration
In order to evaluate the tested sensors, we compared the data collected from them with the corresponding reference values captured by a radiometer (our reference pattern).
A CUV 5 radiometer from Kipp and Zonen Company was employed ; it is able to measure radiation between 280 nm and 400 nm, corresponding to the UV-A and UV-B spectrum. This radiometer has its own portable data logger called METEON , configured to store data every 10 s; during this frame, a sample with the maximum, minimum, and integral values was saved. To build our dataset, we collected irradiation data from the tested sensors and the radiometer for three different days (9–11 March 2021 ). Data were collected over the whole daylight range, from sunrise to sunset in Spain (approximately 8 a.m. to 7:30 p.m.) in order to include both very low irradiation data (at sunrise and sunset) and very high irradiation data (during the middle hours of the day). In addition, the weather on these days was changeable, alternating between cloudy and clear skies. Because data were stored every 10 s, a total of 12,420 samples were collected, including a wide range of irradiation levels. The data collected on the first day were used to select the most accurate sensor by analyzing the correlation between the values provided by the sensors and the values provided by the radiometer (maximum, average, and minimum values). The data collected on the second day were used to obtain the calibration parameters for the best sensor, and the data collected on the third day were used to evaluate its calibration. shows a graphical representation of a selection of the data captured on the first day. It can be seen that the response provided by all the sensors is very similar to the pattern provided by the radiometer. In order to quantify the degree of linear dependence between each sensor and the pattern, a correlation analysis was performed. As shown in , all the tested sensors had a high degree of correlation, with the highest value corresponding to the GUVA-S12SD sensor (Adafruit module), which achieved close to 96% correlation. Thus, this sensor was selected for integration into the final device. After the sensor had been selected, the next step was to calibrate it. As mentioned above, the data collected on the second day were used to obtain the calibration parameters (i.e., the regression line). First, we removed the outliers in the dataset, analyzing the atypical values according to the distribution of the data. In , the relationship between the sensor measures and the pattern after filtering out atypical values can be observed. From these data we obtained the corresponding regression line that should be applied to calibrate the sensor; the equation is shown in . In order to evaluate this calibration, we then applied the obtained regression line to the data collected on the third day, which were not used in the calibration process. In , the calibrated values of the sensor are compared with the reference pattern provided by the radiometer, showing the goodness of the proposed calibration. In order to carry out a statistical evaluation of the calibration, we calculated the absolute error of the samples with respect to the reference pattern, ordered the samples according to this absolute error, and looked for the first sample in which the error exceeded 5%. This occurred at the 90.29th percentile, i.e., only 9.71% of the calibrated values exceeded an absolute error of 5%, which demonstrates the excellent response of the selected sensor.
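The sensor selection and calibration workflow of this section (fit a regression line against the radiometer on one day of paired samples, then check how many calibrated values deviate by more than 5% on another day) can be summarised with a short piece of standard C++. The snippet below is an illustrative re-implementation rather than the authors' actual analysis scripts; the paired measurements are hypothetical placeholders, and the 5% threshold is interpreted here as a relative error with respect to the radiometer.

```cpp
// Illustrative offline calibration check (not the authors' actual analysis code).
#include <cmath>
#include <cstdio>
#include <vector>

struct Line { double slope, intercept; };

// Ordinary least-squares fit of pattern = slope * sensor + intercept.
Line fitRegression(const std::vector<double>& sensor, const std::vector<double>& pattern) {
  const double n = static_cast<double>(sensor.size());
  double sx = 0, sy = 0, sxx = 0, sxy = 0;
  for (std::size_t i = 0; i < sensor.size(); ++i) {
    sx += sensor[i]; sy += pattern[i];
    sxx += sensor[i] * sensor[i]; sxy += sensor[i] * pattern[i];
  }
  const double slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
  return { slope, (sy - slope * sx) / n };
}

// Fraction of calibrated samples whose error with respect to the radiometer exceeds 5%.
double fractionAboveFivePercent(const Line& cal, const std::vector<double>& sensor,
                                const std::vector<double>& pattern) {
  std::size_t bad = 0;
  for (std::size_t i = 0; i < sensor.size(); ++i) {
    const double calibrated = cal.slope * sensor[i] + cal.intercept;
    if (std::fabs(calibrated - pattern[i]) / pattern[i] > 0.05) ++bad;
  }
  return static_cast<double>(bad) / sensor.size();
}

int main() {
  // Hypothetical paired irradiance values (W/m^2): one day for fitting, one for evaluation.
  std::vector<double> day2Sensor  = {10.2, 20.5, 31.0, 39.8, 50.3};
  std::vector<double> day2Pattern = {10.0, 20.0, 30.0, 40.0, 50.0};
  std::vector<double> day3Sensor  = {12.1, 25.4, 36.9, 44.7};
  std::vector<double> day3Pattern = {12.0, 25.0, 37.5, 45.0};

  const Line cal = fitRegression(day2Sensor, day2Pattern);
  std::printf("calibration: pattern = %.4f * sensor + %.4f\n", cal.slope, cal.intercept);
  std::printf("samples with >5%% error: %.1f%%\n",
              100.0 * fractionAboveFivePercent(cal, day3Sensor, day3Pattern));
  return 0;
}
```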
With the most suitable UV sensor for our project selected, in this section we describe the electronic design of the device, with special emphasis on the choice of components and their physical implementation on a PCB ( ). Likewise, both power consumption ( ) and cost ( ) are analysed in order to verify that the restrictions imposed in the project are met. Finally, we describe the tests carried out to evaluate the real operation of the device ( ).
5.1. Electronic Design and Implementation
As previously mentioned, our proposed programmable microcontroller-based device must be capable of calculating the accumulated UV radiation and providing a notification when the lethal dose has been reached. For this purpose, the device has been divided into the following functional stages: control ( ), power supply and regulation ( ), sensor adaptation ( ), and programming and RESET ( ). All these functional stages are integrated on a PCB board, as described in .
5.1.1. Control Stage
This stage consists of the microcontroller and two very low power LEDs (along with their respective bias resistors), which are used to indicate the end of the disinfection process and low battery status, as shown in . A button is added to the PB2 pin to make the operation mode of the device easily understandable and intuitive. Thus, when the user presses the button, if the operation LED flashes for 3 s, the disinfection process is not finished yet; if the operation LED stays on for 3 s, the disinfection process has finished. In addition, this same button is added to pin PB5 through an RC circuit in order to make a hard reset easy by pressing it for a period of 3 s. Regarding the other LED indicator, it remains off in normal operating mode, and only lights up when a critical battery level is reached, indicating that it is advisable to stop using the device. In order to improve the power consumption of the device, when the disinfection process is over the device goes into "sleep" mode (ultra-low consumption mode) until the user notices and deactivates it by turning the switch to the off position. Furthermore, the device enters this state when there is an absence of UV radiation. For the selection of the microcontroller, because sampling frequency was not a determining factor, we analyzed only those devices with low cost, small size, and the capacity to work at low voltages (see ). From these, the ATtiny85V was chosen on account of its balanced consumption and the possibility of configuring it in different sleep modes, being able to consume a minimum of 0.1 μA in this state. It is composed of six programmable I/O pins, with two PWM channels, a 10-bit A/D converter, and several communication protocols. The complete schematic can be reviewed in the manufacturer's datasheet . As can be seen in , the power supply of this microcontroller was set at 1.8 V, which is supplied by the regulation stage described in the next section.
5.1.2. Power Supply and Regulation Stage
In a project such as this, where the device must be exposed to the sun for a long time, it might be thought that a good power supply system would be to use, for example, a photovoltaic panel connected to LiPo rechargeable batteries or supercapacitors. However, this type of solution was discarded in our case because it cannot meet the requirements of the device. On the one hand, due to recent changes in legislation, batteries containing lithium are now considered hazardous waste at the end of their life-cycle .
Therefore, the use of rechargeable batteries was not considered in this project. On the other hand, although the use of supercapacitors would satisfy the eco-sustainability requirement, their price (around EUR 5–6 [ , , ], together with the photovoltaic panel at around EUR 3 [ , , ]) would excessively increase the cost of the device. As we show in , the total cost of the proposed device is around EUR 12, which would be increased by EUR 7–8 if a solution incorporating a photovoltaic panel and supercapacitors were to be used, representing an increase of around 60–70%. Instead, we used single-use zinc alkaline batteries, which meet the eco-sustainability requirement in addition to having a reduced cost and a high charge capacity, as can be seen in . Because the chosen microcontroller can operate with voltages starting from 1.8 V, we used two batteries in series, achieving a nominal voltage of 3 V. shows the diagram of the power supply and regulation stage. The V_BATT signal consists of 3 V supplied by two AA batteries placed in series using a battery holder, which is connected to switch S1 to turn the device on and off. After this, the signal is split into two: the battery level measurement signal (the "BATT_LEVEL" signal), which is discussed later, and the voltage regulation stage signal. The latter consists of a protective Schottky diode followed by a MIC5225-1.8YM5 voltage regulator, which provides a stable power supply to the rest of the circuit. Specifically, this regulator provides a stable 1.8 V output for input voltages of at least 2.3 V; as mentioned before, 1.8 V is the minimum voltage at which the microprocessor can operate. The connections in the regulator were implemented following the recommendations provided in the manufacturer's datasheets . On the other hand, the "BATT_LEVEL" signal is the input to the battery level measurement stage, shown in . This stage consists of a simple asymmetrical buffer with a rail-to-rail operational amplifier, whose input is a voltage divider that adapts the battery level (up to 3 V) to a level suitable for analog reading by the microprocessor, which is powered at 1.8 V. The output signal "PB/LOW_BATT" is connected to one of the inputs of the microprocessor, as shown in , and determines when the low battery indicator LED should blink.
5.1.3. Sensor Coupling Stage
The conditioning circuit for the signal from the GUVA sensor was designed according to the recommendations provided by Adafruit under the open-source license , in this case using an asymmetrical power supply of 1.8 V for the operational amplifier, as shown in the diagram in .
5.1.4. Programming and RESET Stage
An external push button was included in the design, which has three functionalities. First, it allows the microcontroller to be set in read mode in order to be programmed by entering the necessary code via ISP. Second, by holding the button down for 3 s, the user can reset the reading of the accumulated UV dose in case multiple disinfection cycles with the SODIS containers need to be carried out during the day. Finally, with a simple touch, the user is able to check the state of the disinfection process. Its design is the typical switching circuit shown in , consisting of an external protection diode, a pull-up resistor, and a filter capacitor to avoid contact bounce. Under normal conditions, it offers a HIGH voltage level at the output (PB2/RESET) and a LOW level when the push button is pressed.
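As a rough illustration of how the push-button behaviour described for the control and RESET stages can be handled in firmware, the following Arduino-style C++ fragment distinguishes a short press (status check, with a steady or blinking LED for 3 s) from a 3 s long press (reset of the accumulated dose). The pin numbers, the active-low wiring, and the simple polling approach are assumptions made for the example and do not reproduce the actual firmware.

```cpp
// Illustrative push-button handling (Arduino core); assumed pins and active-low wiring.
const uint8_t BUTTON_PIN = 2;                 // push button, LOW when pressed (assumed)
const uint8_t STATUS_LED = 0;                 // disinfection status LED (assumed)
const unsigned long LONG_PRESS_MS = 3000;     // hold time that triggers a dose reset
const float LETHAL_DOSE = 45.0;               // W·h/m^2, set in software

float accumulatedDose = 0.0;                  // updated by the sampling loop (not shown)

void setup() {
  pinMode(BUTTON_PIN, INPUT_PULLUP);
  pinMode(STATUS_LED, OUTPUT);
}

void handleButton() {
  if (digitalRead(BUTTON_PIN) != LOW) return;          // button not pressed

  // Measure how long the button is held down.
  unsigned long pressedAt = millis();
  while (digitalRead(BUTTON_PIN) == LOW && millis() - pressedAt < LONG_PRESS_MS) {
    delay(10);
  }

  if (millis() - pressedAt >= LONG_PRESS_MS) {
    accumulatedDose = 0.0;                             // long press: reset the accumulated dose
    return;
  }

  // Short press: report the status for 3 s
  // (steady LED = disinfection finished, blinking LED = not finished yet).
  unsigned long start = millis();
  while (millis() - start < 3000) {
    if (accumulatedDose >= LETHAL_DOSE) {
      digitalWrite(STATUS_LED, HIGH);
    } else {
      digitalWrite(STATUS_LED, ((millis() / 250) % 2) ? HIGH : LOW);
    }
    delay(10);
  }
  digitalWrite(STATUS_LED, LOW);
}

void loop() {
  handleButton();
}
```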
5.1.5. PCB Board Design
For the manufacture of the final device, a double-sided PCB design was chosen in order to fit tightly into a commercial housing (CU-1941). For protection, most of the electronic components are placed inside the housing (bottom side), leaving only the ON/OFF switch, the UV sensor, the LED indicators, and the push button on the outside (top side). After the device has been assembled, an epoxy resin coating is added to the top side of the device in order to protect it from the operational conditions of real environments (e.g., sun exposure, presence of dust). The final PCB is shown in . With regard to the design parameters used in the routing software, the GRID was chosen according to the PCB manufacturer's capabilities, and the width and thickness of tracks and vias according to the IPC-2221A standard.
5.2. Power Consumption
From the very beginning, power consumption was an influential factor in the design and development of the device, as the dosimeter needed to be able to operate for several months with two AA batteries as the only source of energy. The estimated power consumption of each of the important components of the device is, according to their data sheets, as follows:
- OPAMP MCP6001: 100 μA
- OPAMP AD8515: 300 μA
- ATtiny85V microcontroller: 280 μA (at 1.8 V and 1 MHz)
- APTD2012LSURCK LED diode: 2 mA
- MIC5225-1.8 V regulator: 70 μA
With these data, and taking into account that the disinfection LED indicator only works when the user presses the button, there are a number of energy consumption modes:
- Normal operating mode (without LEDs): 750 μA
- Verification mode with the disinfection process not finished (LED blinking by PWM 45% of the time for a period of 3 s): 1.65 mA
- Verification mode with the disinfection process finished (LED fully on for a period of 3 s): 2.75 mA
- Sleep mode (ultra-low consumption): 0.1 μA
In order to verify these theoretical values, we empirically measured the power consumption of the device, obtaining the following data:
- LED on: 1820 μA
- LED off: 910 μA
- LED flashing: 1320 μA
Using this empirical power consumption, we can calculate the average consumption considering a worst-case scenario in which the user pushes the button ten times per hour and the disinfection process finishes in 3 h:
(4) Avg. consumption = [29 · 3 s · 1.32 mA + 1 · 3 s · 1.82 mA + (3 · 3600 s − 30 · 3 s) · 0.91 mA] / (3 · 3600 s) = 9866.4 mA·s / (3 · 3600 s) = 0.914 mA
It can be seen that even in this unfavourable scenario, the average consumption is very similar to that of the normal operating mode (LED off). Using 3112 mAh alkaline batteries ( ) and a consumption value even larger than the calculated one (0.92 mA), the operating time in hours can be estimated as
(5) Runtime (h) = 3112 mAh / 0.92 mA = 3382.61 h
Thus, for the hypothetical case in which the UV dosimeter is in operation during all the daylight hours of the day in Spain, which in the extreme case amounts to 3000 h per year, the lifetime of the proposed device is at least 411 days, i.e., 13.5 months of continuous active operation during daylight. In addition, it should be noted that the device enters "sleep mode" (ultra-low power consumption) when the endpoint of disinfection has been reached or when no UV radiation is present. In this way, the device does not continue to consume power at the normal rate while waiting for the user to notice that the disinfection process is over. This implies that the lifetime of the device without changing the batteries should be much higher than 411 days in real use.
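The worst-case battery-life estimate of Equations (4) and (5) can be reproduced with a few lines of standard C++, which also makes it easy to re-run the estimate for other usage patterns. The snippet below simply re-computes the figures given above and is provided for illustration only.

```cpp
// Re-computation of the worst-case average current (Eq. 4) and runtime (Eq. 5).
#include <cstdio>

int main() {
  // Measured currents of the device in each state (mA).
  const double ledFlashing = 1.32, ledOn = 1.82, ledOff = 0.91;

  // Worst-case scenario: 10 button presses per hour, disinfection finished after 3 h.
  const double pressSeconds = 3.0, totalSeconds = 3.0 * 3600.0;
  const int pressesNotFinished = 29, pressesFinished = 1;

  const double charge_mAs = pressesNotFinished * pressSeconds * ledFlashing
                          + pressesFinished * pressSeconds * ledOn
                          + (totalSeconds - 30 * pressSeconds) * ledOff;
  const double avgCurrent_mA = charge_mAs / totalSeconds;        // ~0.914 mA

  // Runtime with 3112 mAh batteries, using a slightly pessimistic 0.92 mA draw.
  const double runtime_h = 3112.0 / 0.92;                        // ~3382.6 h

  std::printf("average current: %.3f mA\n", avgCurrent_mA);
  std::printf("estimated runtime: %.1f h (about %.0f days at ~8.2 daylight hours/day)\n",
              runtime_h, runtime_h / (3000.0 / 365.0));
  return 0;
}
```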
5.3. Device Unitary Cost
shows a detailed budget of all the components to be used in the case of manufacturing 100 units of the final device. This budget includes PCB fabrication, component assembly, and shipping. To account for fabrication and assembly, a purchasing estimate was carried out on the JLCPCB website , using standard options for fabrication along with non-priority shipping. Taking into account the budget shown, we estimate a unit manufacturing cost of less than EUR 12 for a batch of 100 units.
5.4. Device Validation Tests: Error in the Endpoint Time
The objective of these tests is to calculate the difference in the time needed to reach the lethal UV dose between the proposed device and the radiometer used as the reference pattern, i.e., the relative error of our device in terms of the time at which it indicates the end of the disinfection process. To do this, both devices were first exposed to direct sunlight simultaneously over a period of time, and the obtained UV radiation data were recorded. shows the values for both devices. The accumulated UV radiation was then calculated in both cases and the difference in the time required to reach the same values was determined. Several tests were carried out, for which the device was programmed with different values for the lethal UV dose. In particular, for a lethal UV dose of 45 Wh/m² (the lethal value corresponding to enterococcus), which is the most widely used in practice, the proposed device reached this value with a delay of 40 s with respect to the radiometer after approximately 3 h of measurement. According to these values, our device achieves a relative error of 0.36% in the time that indicates the end of the disinfection process. In addition, this endpoint time is delayed with respect to the real time, meaning that it does not imply any risk that might affect the disinfection of the water. On the contrary, the later indication introduces a small safety margin that eliminates the risk of the drinking water being unsuitable for consumption. These results are shown in .
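The validation procedure of this subsection (accumulating the 10 s irradiance samples of each instrument and comparing the instants at which they reach the programmed lethal dose) can be expressed compactly in code. The standard C++ sketch below is illustrative only and uses hypothetical constant-irradiance logs; it is not the evaluation script used by the authors.

```cpp
// Illustrative endpoint-time comparison between the device and the reference radiometer.
#include <cstdio>
#include <vector>

// Returns the time (s) at which the accumulated dose of a 10 s-sampled irradiance
// series (W/m^2) first reaches the target dose (W·h/m^2), or -1 if it is never reached.
double endpointTime(const std::vector<double>& irradiance, double targetDose_WhM2) {
  const double dt_h = 10.0 / 3600.0;     // 10 s expressed in hours
  double dose = 0.0;
  for (std::size_t i = 0; i < irradiance.size(); ++i) {
    dose += irradiance[i] * dt_h;
    if (dose >= targetDose_WhM2) return (i + 1) * 10.0;
  }
  return -1.0;
}

int main() {
  // Hypothetical logs: constant 15 W/m^2 for the radiometer, 14.9 W/m^2 for the device.
  std::vector<double> radiometer(2000, 15.0), device(2000, 14.9);
  const double lethalDose = 45.0;        // Wh/m^2 (enterococcus)

  const double tRef = endpointTime(radiometer, lethalDose);
  const double tDev = endpointTime(device, lethalDose);
  std::printf("radiometer: %.0f s, device: %.0f s, relative error: %.2f%%\n",
              tRef, tDev, 100.0 * (tDev - tRef) / tRef);
  return 0;
}
```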
In this paper we have proposed a low cost and eco-sustainable electronic device that can be used to determine the end of the disinfection process in the SODIS method. The device should be placed next to SODIS containers in order to measure the accumulated UV irradiation, notifying users when the lethal UV dose has been reached. Because this lethal UV dose may be different for different regions of the world and different types of SODIS containers, a programmable device has been designed that can be used in any case. Because the DNA repair mechanisms of exposed cells are overwhelmed by UV radiation , the damage induced by sunlight continues even when the cells are taken out of the sun and incubated in the dark. This is particularly important with respect to the possibility of bacterial re-growth during storage. The most challenging constraint of this project was to develop a low cost solution, which imposed severe restrictions on the design of the device and the selection of the electronic components. First, as mentioned in , we have analyzed different low cost UV sensors in order to select the most accurate one by comparing their response with the reference pattern provided by a radiometer. This analysis shows that low-cost sensors provide very good responsiveness in the measurement of both UV-A and UV-B radiation, reaching a correlation with respect to the reference pattern of around 96% in the case of the GUVA-S12SD sensor. In , we have described the electronic design of the proposed device, which uses the selected sensor to measure the accumulated UV radiation and compares this value with the lethal UV dose to determine the end of the disinfection process. In addition to the low cost constraint, sustainability has been taken into account in this design as well, leading to a proposed device which has sufficient autonomy to work for more than a year with small and low-cost disposable batteries, thereby avoiding the use of rechargeable batteries, which are considered hazardous waste at the end of their useful life. Finally, the device was manufactured and tested in real conditions by comparing its response with the reference pattern provided by the radiometer. According to these tests, a relative error of only 0.36% in the endpoint time was obtained.
|
Assessing emergency physicians' competency gaps in caring for acute psychiatric emergencies: a comparative analysis of self-perceived confidence and performance against training program expectations | 17677f65-d5ae-4612-b53a-85d6d06f8a6a | 11515569 | Psychiatry[mh] | The prevalence of mental disorders has steadily increased over the years and is associated with substantial economic and social costs . As the average life expectancy increases , it becomes increasingly important to identify individuals with mental illness as they are more susceptibele to medical compared to the general population [ – ]. In recent years, there has been a clear global trend showing a rise in the prevalence of mental disorders among visitors to emergency departments (EDs) [ – ]. In the United States, approximately one in eight ED visits is related to mental disorders . Recent data indicate that ED visits have increased by 10% for patients primarily presenting with mental disorders and by 50% for those with comorbid mental disorders . This reflects a consistent relationship between psychopathology and disability . Identifying individuals with mental illness is crucial globally, given their heightened vulnerability to medical comorbidities, as highlighted by the notable surge in emergency department visits worldwide . The emergency psychiatric service serves as a vital and irreplaceable gateway to the entire mental health network, playing a crucial role as a conduit for immediate psychiatric assessment and treatment within general hospitals . Research shows that patients with psychobehavioral complaints tended to have a 3.2-fold longer stay in the ED compared to those with nonpsychiatric chief complaints . Patients in psychiatric crisis often present to the emergency room (ER), and after initial stabilization, they need to be transferred to inpatient or outpatient psychiatric care for further treatment. However, a shortage of inpatient psychiatric programs, the presence of comorbid medical conditions, and insurance-related issues may lead to prolonged stays in the ER stays, further increasing the waiting time for hospitalization. Factors such as the need for hospitalization, the use of restraints, and the completion of diagnostic imaging significantly affect post-assessment boarding time. Collectively, these factors increase the risk of prolonged stays in the ER, complicating and lengthening the management of psychiatric emergency patients [ – ]. Doctors in the ED frequently provide care for patients with preexisting psychiatric conditions who are currently unwell, injured, or in a behavioral crisis . Consequently, they have become a key frontline healthcare providers for acute psychiatric emergencies . As the number of undiagnosed and untreated patients with an acute psychiatric presentation in the ED rises, so does the expectation for emergency physicians to quickly identify and manage these patients effectively . Proficiency in evaluating and managing patients with acute mental illnesses is becoming an essential component of emergency medicine (EM) residency training. However, existing training programs often rely on on-the-job training for residents, leading to variations and inconsistencies in how psychiatric emergencies are addressed across different programs. We argue that insufficient training in psychiatric emergencies can create challenges for emergency physicians in promptly identifying and effectively treating patients with acute psychiatric presentations. 
This deficiency may contribute to negative attitudes towards these patients, potentially affecting the quality of care provided to individuals experiencing acute mental illness . Unfortunately, training in acute mental illness has historically received insufficient attention . Taiwan's national healthcare system plays a significant role in the Asian context, characterized by good accessibility and comprehensive population coverage. Within this framework, a well-designed emergency medicine residency training program has been in place for more than twenty years. According to the Taiwan Ministry of Health and Welfare, the number of individuals seeking treatment for mental disorders increased by 3.6% (equivalent to approximately 100,000 individuals) in 2019 compared to 2018. As demand for emergency services for acute mental illnesses rises, the lack of research on training for acute psychiatric emergencies becomes increasingly apparent. Against this backdrop, our study aims to identify the factors influencing emergency physicians' management of acute psychiatric emergencies, with a focus on confidence levels and competency gaps, to determine whether current training programs sufficiently align with societal expectations for psychiatric emergency care.
Study design This study employed a cross-sectional survey method involving emergency physicians across Taiwan. The online questionnaire was disseminated by the secretary of the Taiwan Society of Emergency Medicine (TSEM). Ethical approval for this study was obtained from the local ethics board (IRB No. 202101794B0). Informed consent was obtained from participants before they completed the questionnaires. Context and training program In Taiwan, emergency medicine training has been established for more than 20 years, with one month of psychiatric emergency training comprising only 2% of the total training time. This training includes acute and chronic ward patient care, emergency psychiatric consultation, as well as experiential learning in outpatient departments. Throughout this one-month training period, participants develop competencies in psychiatric interviewing and history-taking, addressing alcohol and substance abuse, suicide assessment and intervention, management of emotional and psychotic disorders, utilization of chemical and physical restraint techniques, addressing mental health concerns in the elderly population, and acquiring foundational knowledge in child psychiatry. Compared to other countries, the training of emergency medicine residents in the field of psychiatry is relatively well established and meets the requirements of the emergency department setting . Participants According to the TSEM, there are 43 hospitals with emergency medicine training programs approved by the Resident Review Committee (RRC). Overall, 1385 physicians (936 attending physicians and 449 residents) were invited to participate in this study during the winter of 2021. The participants included those who had received or were currently receiving psychiatric emergency medicine training in emergency medicine residency training programs. We excluded physicians who had not rotated through emergency psychiatric courses or who had left RRC-approved emergency medicine training hospitals. Survey instrument A committee composed of senior emergency educators and program directors developed the questionnaire through expert consensus. The questionnaire consisted of three sections. The initial section included participants' demographic information, such as gender, clinical seniority, the level of the training hospital, duration of psychiatric ward rotation, the department responsible for the curriculum, assessors of clinical psychiatric training outcomes, the average number of psychiatric patients encountered during a daily work shift, and the presence of an acute psychiatric ward in the hospital. The second part was a list of skill requirements based on the emergency medicine (EM) residency training program. The questionnaire asked participants to rate their handling ability and mastery of ten conditions/skills (history-taking with psychiatric patients, substance use disorders, mood disorders, psychotic disorders, safety planning with suicidal patients, use of physical or chemical restraints, risk assessment of harm to self or others, and psychiatric emergencies in the elderly/pregnant/children). We used a ten-point unipolar response scale (0 = nothing or not at all needed to learn or improve; 9 = very large amount or extremely needed to learn or improve). Finally, we asked EM residents to rate their confidence in managing acute psychiatric patients independently. We used a ten-point unipolar response scale (0 = not at all confident; 9 = extremely confident) for these questions.
We also asked about their satisfaction with the rotation time and the current psychiatric training content. The last question in the questionnaire was an open-ended question about the training program (including training patterns, advice for the training program, and so on). We conducted a reliability analysis of the questionnaire, and the Cronbach’s alpha for the overall scale was 0.94, indicating excellent internal consistency. This suggests that the items reliably measure the intended construct. Data analysis We used counts and percentages to describe discrete variables and used the chi-square test to compare the two groups. Numeric questionnaire results are presented as the median (IQR), and the Wilcoxon rank-sum test was used for comparison. A radar chart was used to show the differences and comparisons of certain abilities between the two groups. A p value of < 0.05 was considered to indicate statistical significance. The statistical analysis was performed using Microsoft Excel and SAS statistical software version 9.4 (SAS Institute, Cary, NC, USA).
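As an aside, the reliability figure reported above can be reproduced conceptually with a few lines of code. The sketch below computes Cronbach's alpha from a respondents-by-items rating matrix; the randomly generated example data and variable names are hypothetical and are not the study data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) rating matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_var = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_var.sum() / total_var)

# Hypothetical 0-9 ratings from 50 respondents on 10 items (illustration only)
rng = np.random.default_rng(0)
ratings = rng.integers(0, 10, size=(50, 10))
print(f"Cronbach's alpha: {cronbach_alpha(ratings):.2f}")
```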
A total of 229 participants were enrolled, comprising 190 males and 39 females. Among them, 83 were residents, while 146 were attending physicians, constituting 63.8% of the total. Of the participants, 81.7% were affiliated with medical centers. Additionally, 63.8% of the participants had received more than one month of training in psychiatric ward patient care. Full demographic and descriptive details are provided in Table . Psychiatrists led 69.9% of the emergency psychiatric training curricula, and 66.8% of the participants self-reported that their clinical psychiatric training outcomes were assessed by psychiatrists. Regarding patient volume, 66.8% of the participants encountered a low volume of acute psychiatric emergencies, or none at all, during each shift, with 62.9% reporting seeing only one to two such patients per shift. Furthermore, ninety percent of participants’ training hospitals were capable of admitting psychiatric patients. Clinical seniority, the level of the training hospital, and the average number of emergency psychiatric patients encountered in a daily EM shift influenced emergency physicians' self-reported confidence in independently managing acute psychiatric patients (Table ). However, the duration of the psychiatric ward rotation, the department that assessed or was in charge of the psychiatric training curriculum, whether a psychiatrist was available, and whether the hospital could admit psychiatric patients did not influence the participants' confidence in independently managing acute psychiatric patients. To determine the competence gap for addressing psychiatric-related emergencies, we chose categories that included the level of supervision (residents under supervision vs. attending physicians), the type of healthcare facility (local hospital or medical center), and the learning opportunity (average number of acute psychiatric patients seen in a daily shift). Figure (A) shows a comparison of the differences in various psychiatric-related abilities between residents under supervision and attending physicians. Compared with attending physicians, residents demonstrated areas in need of reinforcement across the following dimensions: disease diagnosis and management (substance use disorder, mood disorder, psychotic disorder) and risk assessment (self-harm and violence). We found a similarly large gap in confidence between the two groups in handling acute psychiatric disorders in special populations (elderly individuals/children/pregnant individuals). In Fig. (B), we compare the competence gaps between training settings (medical centers and local hospitals). We found that EM residents under training and supervision in local hospitals were in need of reinforcement in all skill sets and competencies for managing acute psychiatric emergencies. Interestingly, participants trained at medical centers self-reported a greater ability to manage psychiatric emergencies among elderly individuals than did those trained at local hospitals. Figure (C) shows that participants who saw an average of more than two patients per daily shift demonstrated more confidence in dealing with the general population. In all aspects, including psychiatric emergencies in special populations, participants with more than two patients showed a significantly smaller competence gap than did those with fewer than two patients.
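To illustrate how the radar-chart comparison of competence gaps could be drawn, a minimal matplotlib sketch is given below. The skill labels are paraphrased from the questionnaire and the scores are hypothetical, not the study results.

```python
import numpy as np
import matplotlib.pyplot as plt

skills = ["History taking", "Substance use", "Mood disorders", "Psychotic disorders",
          "Suicide safety", "Restraints", "Risk assessment", "Elderly", "Pregnancy", "Children"]
residents = [5, 4, 5, 4, 4, 5, 4, 3, 2, 2]    # hypothetical median self-ratings
attendings = [7, 6, 7, 6, 6, 7, 6, 4, 3, 3]   # hypothetical median self-ratings

angles = np.linspace(0, 2 * np.pi, len(skills), endpoint=False).tolist()
angles += angles[:1]  # close the polygon

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
for label, values in [("Residents", residents), ("Attending physicians", attendings)]:
    vals = list(values) + [values[0]]
    ax.plot(angles, vals, label=label)
    ax.fill(angles, vals, alpha=0.1)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(skills, fontsize=7)
ax.set_ylim(0, 9)
ax.legend(loc="upper right", fontsize=8)
plt.show()
```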
This study presents a nationwide survey assessing the effectiveness of acute psychiatric emergency education within Emergency Medicine (EM) resident training programs in Taiwan. Participants spanned various career stages, representing a range of experiences and educational backgrounds accumulated over the past two decades. Compared to previous research, our study identified several key factors influencing physicians' ability to manage psychiatric emergencies in acute care settings, including seniority (attending physicians outperform resident physicians), training context (medical centers outperform regional hospitals), and the volume of patient encounters during duty shifts. Specifically, attending physicians, those trained in medical centers, and individuals who encountered more than two patients during duty shifts reported having a stronger skill set in managing general psychiatric emergencies in the ED context. For specific patient groups, medical center training was associated with enhanced capability in managing elderly psychiatric emergencies, while increased patient encounters during EM shifts helped close the competence gap in managing children and psychiatric emergencies during pregnancy. It is important to note that there remains room for improvement in managing psychiatric emergencies, particularly in vulnerable populations such as elderly individuals, pregnant women, and children. Clinical seniority typically reflects a physician's experience with patients, with confidence stemming from accumulated clinical knowledge and practice. While attending physicians are adept at managing common psychiatric emergencies, addressing the needs of specific populations, such as the elderly, pregnant women, and children, still demands specialized training and systematic case management tracking at medical centers. Previous studies have shown that more than half of emergency physicians believe they need greater proficiency in managing acute psychiatric emergencies, especially with vulnerable populations (e.g., pregnant women, elderly individuals, and children), where confidence is lacking [ , , ]. Our research echoed these findings, demonstrating a significant competence gap among emergency physicians when managing special populations, which does not diminish with greater seniority or experience. The reasons behind this competence gap are multifaceted, as special populations often have a higher comorbidity of psychiatric and physiological conditions, making their diagnosis and management more complex . Additionally, emergency physicians typically learn through case-based, hands-on problem-solving, which may not adequately address the unique challenges posed by these populations. For elderly individuals, psychiatric emergency visits occur less frequently compared to the general population. This, combined with the complexities of diagnosing psychiatric conditions in older individuals, makes it more challenging for physicians to recognize psychiatric emergencies in this age group [ – ]. Interestingly, our study found that physicians trained at medical centers had greater confidence in managing psychiatric emergencies in elderly patients compared to those trained in regional hospitals. In our study context, elderly patients with psychiatric emergencies, such as delirium or major neurocognitive disorder (dementia) accompanied by neuropsychiatric symptoms, are often admitted to geriatric medicine departments and neurology wards within medical centers.
As a result, emergency physicians at medical centers have a significantly greater opportunity to interact with and manage such patients during their training than they do with other special populations. This contrasts with their experiences managing psychiatric emergencies in pregnant women and children, where confidence was less correlated with training location. Training for acute psychiatric illnesses has gained increasing attention in recent years, particularly with the increase in the number of psychiatric patients seeking emergency care after the COVID-19 pandemic . The demands on emergency physicians to identify, diagnose, and manage these patients have consequently increased. Our nationwide research indicates an urgent need for psychiatric disease education and practices tailored to specific populations. Future reforms should prioritize focusing resident training for psychiatric emergencies on medical centers, particularly to enhance skills in managing elderly patients. Additionally, innovative teaching and learning methods such as simulation-based training, video-assisted teaching, and collaboration with specialized psychiatric hospitals could help enhance the ability to treat patients from special populations. Simulation-based education, which effectively translates clinical practice into the educational setting, has proven beneficial at various stages of psychiatric education . We believe that in the near future, an increased focus on psychiatric emergencies, adjustments to training programs, and the lifelong learning of emergency physicians can improve clinical competencies and provide more comprehensive care for psychiatric emergency patients. Limitations First, this was a single cross-sectional national survey study, so cultural and contextual differences should be considered before generalizing the results to a different country or setting. Second, although we collected diverse data on the characteristics of emergency settings at different levels, we lacked precise information on the frequency of acute psychiatry consultations, which would have been useful for exploring and interpreting the determining factors affecting emergency residency training outcomes. In addition, the response rate was 16.53% of the total population. The relatively low response rate may be due to the impact of the COVID-19 pandemic on emergency departments during the survey period, as well as the nature of the national survey, emergency work culture, and a lack of incentives, which could potentially influence the validity and transferability of the results.
Our study highlights the factors influencing emergency physicians' preparedness in managing psychiatric emergencies within acute care settings. While attending physicians, those trained in medical centers, and those with higher patient encounter rates demonstrate proficiency in certain areas, there remains room for improvement, particularly in addressing the needs of specific patient populations. These findings underscore the importance of refining training curricula to bridge these gaps and enhance the quality of psychiatric emergency patient care.
Prognostic value of functional SMAD4 localization in extrahepatic bile duct cancer

Biliary tract cancers (BTCs), which include intrahepatic bile duct cancer (BDC), extrahepatic BDC (eBDC), and gallbladder cancer, arise from the epithelium of the bile duct and are highly malignant neoplasms. Although curative resection is the only effective treatment, more than half of BTC patients cannot undergo surgery because they are diagnosed at an advanced stage . In addition, there is a high relapse rate in patients who undergo curative resection . Gemcitabine plus cisplatin (GC) has been the standard chemotherapy treatment for advanced/recurrent BTCs based on the results from the ABC-02 trial and the BT-22 trial . Gemcitabine and S-1 combination therapy (GS) has been shown to be non-inferior to GC therapy . However, the recommended treatment options for unresectable or metastatic disease are limited, and the prognosis of these patients is poor, with a median overall survival (OS) of approximately 1 year . BTC is a genetically diverse collection of cancers. Genomic and transcriptomic analysis of BTCs has been performed to understand the molecular landscape and to develop new molecular targeted therapies . KRAS , TP53 , ARID1A , and SMAD4 have been identified as the most prevalent mutations in eBDC . SMAD4 is a key mediator of the TGFβ signaling pathway and acts via nuclear translocation. The protein functions as a tumor suppressor and inhibits cell proliferation. Mutations and deletions of SMAD4 have been most commonly documented in pancreatic adenocarcinoma, biliary tract cancer, and colorectal cancer . Furthermore, the loss of SMAD4 protein expression has been shown to correlate with poor prognosis in pancreatic, appendiceal, and esophageal adenocarcinomas. We previously showed that SMAD4 contributes to chemoresistance in BTCs by inducing epithelial-mesenchymal transition (EMT) . The TGFβ signaling pathway plays a dual role as both a tumor suppressor and a tumor promoter depending on the tumor stage and tumor microenvironment . Genomic alterations of genes encoding components of the TGFβ pathway, including SMAD4 , have been observed frequently in hepatobiliary cancer. We previously visualized more nuclear-translocated functional SMAD4 at the tumor invasion front than at the central lesion . We hypothesized that this intra-tumoral heterogeneity of functional SMAD4 is induced by tumor progression and that the effect of SMAD4 on tumor progression depends on tumor stage. However, the significance of functional SMAD4 localization has not been examined in any detail. Thus, our objective in the present study was to investigate the localization of functional SMAD4 in BTC and its significance using resected specimens. We also examined the association of functional SMAD4 with chemotherapy and radiotherapy.
Resected specimens and patient characteristics We retrospectively analyzed 98 cases of eBDC (54 perihilar bile duct cancers and 44 distal bile duct cancers) in patients who underwent R0 or R1 resection between 2004 and 2018 at Osaka University Hospital or Osaka International Cancer Institute in Osaka, Japan. The race/ethnicity of all patients in this cohort was Japanese/Asian. Patients who underwent R2 resection were excluded from this study (Fig. ). Resected specimens were formalin-fixed and preserved in paraffin blocks prior to immunohistochemistry. The use of resected samples was approved by the Human Ethics Review Committee of the Graduate School of Medicine, Osaka University (No. 20493). Written informed consent was obtained from all patients included in the study. Pre-operative treatment and follow-up treatment after surgery After routine examination of the general condition, we performed computed tomography (CT) or magnetic resonance imaging, endoscopic retrograde cholangiography and/or percutaneous transhepatic cholangiography, an electrocardiogram, a spirogram, and chest X-rays. Preoperative staging was performed using image-based diagnosis. The treatment procedure was determined by the cancer board at each institution, consisting of radiologists, gastroenterologists, hepatologists, oncologists, and surgeons. After surgery, patients regularly underwent CT and the measurement of serum carcinoembryonic antigen (CEA) or carbohydrate antigen 19–9 (CA19-9) every 3 months in the first 2 years and every 6 months thereafter. If recurrence was clinically suspected, additional blood tests and imaging were performed to confirm recurrence. Recurrence was diagnosed based on these findings. After recurrence, we treated patients with chemotherapy, radiation, surgery, or best supportive care depending on the patients’ condition and the site and number of recurrences. Immunohistochemistry Immunohistochemical staining for SMAD4 was carried out as described previously. In summary, resected specimens were cut into 3.5-µm slices, deparaffinized with xylene and ethanol, and bathed in citrate buffer at 110 °C for 20 min for antigen retrieval. Endogenous peroxidase activity was inhibited by treating the tissue sample with 3.0% hydrogen peroxide solution in methanol for 20 min. Non-specific binding sites were blocked in 1 mol/L PBS with 10% normal goat serum from the Avidin/Biotin Blocking Kit (Vector Laboratories Inc., Burlingame, CA, USA). The slices were incubated at 4 °C overnight with anti-SMAD4 antibody (mouse monoclonal antibody, 1:50 dilution, Santa Cruz Biotechnology, USA). After washing with PBS, sections were incubated with secondary antibody from the Avidin/Biotin Blocking Kit (Vector Laboratories) for 1 h. Sections were stained with avidin–biotin complex reagents (Vector Laboratories) and 3,3′-diaminobenzidine (DAB) and counter-stained with hematoxylin. Finally, sections were dehydrated in graded concentrations of ethanol and xylene and mounted. Evaluation of immunohistochemistry Functional SMAD4 status was evaluated based on the intensity of nuclear staining. We defined ‘negative’ (score = 0) when there was no nuclear SMAD4 expression, ‘weakly positive’ (score = 1) when the percentage of nuclear-positive SMAD4 expression was 0–25%, ‘moderately positive’ (score = 2) when the percentage was 25–50%, and ‘strongly positive’ (score = 3) when the percentage was above 50%.
We confirmed the cancer area with hematoxylin and eosin staining of the specimens. The invasion front was defined as the front edge between tumor cells and stromal cells, and the central lesion was defined as the central part of the tumor mass or the tissue near the bile duct lumen (Fig. a). The nuclear staining intensity of each slide was scored separately at the invasion front and the central lesion. The total score was calculated as the sum of the scores from four different 400× visual fields. Therefore, the highest possible score was 12 points, and the lowest was 0 points. Each slide was evaluated in a blinded manner by two authors (S.K. and H.T.) who did not have any clinical or pathological information regarding the sample, in order to avoid bias and subjective interpretation. Statistical analysis All data were expressed as mean ± standard deviation (SD). Between-group differences in clinicopathological characteristics were analyzed using Student’s t test for continuous variables and the chi-squared test for categorical variables. Overall survival (OS) and recurrence-free survival (RFS) rates were calculated using the Kaplan–Meier method. Univariate and multivariate analyses were performed using the Cox proportional hazards regression model. P < 0.05 was considered to indicate statistical significance. All statistical analyses were performed in JMP 14.0 software (SAS Institute, Cary, NC) by Hirotoshi Takayama.
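For clarity, the scoring scheme described above can be summarized as a short routine: each 400× field is scored 0–3 from the percentage of nuclear-positive tumor cells, the four field scores are summed (0–12), and the total is dichotomized at ≤6 versus ≥7 (with 0 treated as absent in later analyses). The code below is an illustrative sketch with hypothetical function names, not the software used in the study.

```python
def field_score(percent_nuclear_positive: float) -> int:
    """Score one 400x field: 0 = no nuclear staining, 1 = 0-25%, 2 = 25-50%, 3 = >50%."""
    if percent_nuclear_positive <= 0:
        return 0
    if percent_nuclear_positive <= 25:
        return 1
    if percent_nuclear_positive <= 50:
        return 2
    return 3

def total_score(field_percentages: list[float]) -> int:
    """Sum of four field scores; ranges from 0 to 12."""
    assert len(field_percentages) == 4
    return sum(field_score(p) for p in field_percentages)

def classify(score: int) -> str:
    """Absent (0), low (1-6), or high (7-12) functional SMAD4."""
    if score == 0:
        return "absent"
    return "low" if score <= 6 else "high"

# Example: fields with 30%, 10%, 0%, and 60% nuclear-positive cells -> 2+1+0+3 = 6 -> "low"
print(classify(total_score([30.0, 10.0, 0.0, 60.0])))
```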
Patient characteristics The entire cohort of 98 eBDC patients is summarized in Table . This cohort included 69 men (70.5%) and 29 women (29.5%). The mean age was 68.2 ± 9.0 years. The main tumor locations were the distal bile duct (n = 54, 55.1%) and the perihilar bile duct (n = 44, 44.9%). Among the included patients, 78 (79.6%) achieved R0 resection, and the other 20 patients (20.4%) had microscopically positive surgical margins (R1 resection). Forty-six patients (46.9%) received adjuvant chemotherapy. Immunohistochemical findings of SMAD4 in eBDC The typical immunohistochemical expression of SMAD4 is demonstrated in Fig. b–e. SMAD4 expression was found in the cytoplasm and/or nucleus of tumor cells. We evaluated SMAD4 staining in the nuclei of tumor cells as functional SMAD4. Supplemental Figure shows a histogram of the SMAD4 immunohistochemical scores at the central lesion and the invasion front of resected specimens. The median SMAD4 staining score was 6 at both the central lesion and the invasion front, and the mean ± SD score was 6.13 ± 3.63 at the central lesion and 5.57 ± 3.75 at the invasion front. We defined a score ≤ 6 points as low SMAD4 function and a score ≥ 7 points as high SMAD4 function for both sites. Association between functional SMAD4 expression and clinicopathological factors among eBDC patients who underwent upfront surgery Table shows the correlation between functional SMAD4 staining at the central lesion and the tumor invasion front, and Fig. a, b shows the RFS and OS curves for the four groups created when stratifying by SMAD4 staining at the two sites. There was no significant difference in these analyses, but in the analysis of OS, there was a marginal difference; the group without SMAD4 expression at any site had the poorest prognosis. Next, based on SMAD4 expression at the central lesion and invasion front, we classified upfront surgery patients into two groups: no SMAD4 expression at any site (n = 6) and other cases (n = 67). Supplemental Table summarizes the comparison of patient characteristics between the two groups. No SMAD4 expression was non-significantly associated with higher invasion of the liver (66.7% vs. 26.9%, P = 0.053) and the nervous system (100.0% vs. 76.9%, P = 0.078). We did not observe a significant correlation of SMAD4 status with other clinicopathological factors. Association between SMAD4 expression and prognosis among eBDC patients who underwent upfront surgery Figure c, d shows the Kaplan–Meier survival curves for RFS and OS according to SMAD4 expression. Patients without SMAD4 expression at any site had significantly poorer OS than the other patients (3-year OS rate: no SMAD4 expression, 20.83%; other cases, 67.22%; P = 0.014). There was no significant difference in RFS (3-year RFS rate: no SMAD4 expression, 0.0%; other cases, 46.69%; P = 0.120). Association between intensity of SMAD4 staining at each area and clinicopathological factors in the upfront surgery group with SMAD4 expression at any site In 67 cases with SMAD4 expression at any site, the association of prognosis and clinicopathological factors with SMAD4 intensity was evaluated separately for the tumor invasion front and the central lesion. The cases were divided into a SMAD4 low group and a SMAD4 high group. Supplemental Table summarizes the association between SMAD4 staining and clinicopathological factors. Cases with low SMAD4 staining at the invasion front showed higher rates of invasion of the venous system (P = 0.044) and the nervous system (P = 0.031).
Supplemental Figure shows the survival analysis. We did not find a significant difference between the two groups at either site. Three groups according to SMAD4 status We divided the 73 patients who underwent upfront surgery into three groups depending on the SMAD4 status in each area. The SMAD4 immunohistochemical score was 0 in the absent group, 1–6 in the low group, and 7–12 in the high group. Supplemental Tables and compare clinicopathological factors among the three groups. In the classification of the central lesion, we found no significant difference in clinicopathological factors among the three groups. In the classification of the invasion front, we found a significant difference in microinvasion into the venous system and the liver. We also compared the prognosis of the three groups. Figure e–h shows the Kaplan–Meier survival curves for RFS and OS. In the analysis of the central lesion, we found no significant difference among the three groups. In contrast, in the analysis of the invasion front, we found a significant difference (RFS, P = 0.033; OS, P = 0.047) among the three groups, and the absent group had the shortest RFS and OS. SMAD4 expression at the metastatic lymph node SMAD4 immunostaining of the metastatic lymph node was also performed in the 14 cases for which a resected specimen was available. We divided the 14 patients into 6 cases with SMAD4 expression and 8 cases without SMAD4 expression. Figure shows the Kaplan–Meier survival curves stratified by SMAD4 expression in the metastatic lymph node. In the RFS analysis, we found no significant difference. In the OS analysis, the group without SMAD4 expression at the metastatic lymph node had a poorer prognosis (P = 0.011). Association between SMAD4 expression and adjuvant chemotherapy We also evaluated the correlation between the effect of adjuvant chemotherapy and SMAD4 expression. Supplemental Figure presents the Kaplan–Meier survival curves of the 73 patients who underwent upfront surgery stratified by the presence or absence of adjuvant chemotherapy. In the analyses of both RFS and OS, adjuvant chemotherapy did not improve the prognosis. Notably, there was no clear evidence of an effect of adjuvant chemotherapy on eBDC. Next, we classified patients into subgroups depending on the SMAD4 status (low or high) in each area (central lesion and invasion front) and investigated the effect of adjuvant chemotherapy on the prognosis of each group. Supplemental Figures and show the Kaplan–Meier survival curves of these subgroups stratified by the presence or absence of adjuvant chemotherapy. No improvement from adjuvant chemotherapy was observed in any subgroup. Univariate and multivariate analysis of survival in the upfront surgery group Supplemental Tables and present the results of univariate and multivariate analyses of factors influencing survival using the Cox proportional hazards model. In the analysis of RFS, univariate analysis showed that invasion into the venous system (hazard ratio [HR] 2.034, P = 0.037), invasion of the nervous system (HR 4.711, P = 0.011), positive lymph node metastasis (HR 2.152, P = 0.024), and residual tumor (HR 2.950, P = 0.002) were associated with poor prognosis. Multivariate analysis revealed that invasion of the nervous system (HR 4.250, P = 0.023) and residual tumor (HR 2.860, P = 0.025) were independent prognostic factors.
In the analysis of OS, univariate analysis showed that no SMAD4 expression at any site (HR 3.551, P = 0.022), invasion into the lymphatic system (HR 3.136, P = 0.013), and invasion of the nervous system (HR 5.269, P = 0.024) were prognostic factors. Multivariate analysis revealed that invasion into the lymphatic system (HR 3.136, P = 0.024) and invasion of the nervous system (HR 4.606, P = 0.043) were independent prognostic factors. SMAD4 expression and survival after recurrence To evaluate the association between SMAD4 and chemotherapy, we also evaluated survival time after recurrence. We considered that the recurrence site would have the same features as the invasion front. Among the 67 patients with SMAD4 expression at any site, 33 experienced recurrence. Survival time after recurrence was evaluated according to SMAD4 status at the central lesion and the tumor invasion front (Fig. ). At both the central lesion and the invasion front, the high SMAD4 group had a poorer prognosis than the low SMAD4 group (central lesion, P = 0.011; invasion front, P = 0.056). Change in SMAD4 expression after neoadjuvant chemotherapy Finally, to evaluate SMAD4 in residual cancer after chemotherapy and chemo-radiation therapy, we examined functional SMAD4 in resected specimens after neoadjuvant therapy. Four cases were treated with neoadjuvant chemotherapy (NAC), 21 cases with neoadjuvant chemo-radiation therapy (NAC-RT), and 73 cases underwent upfront surgery. The clinicopathological factors of the three groups are summarized in Supplemental Table . The survival analysis of the three groups is shown in Supplemental Figure . The NAC-RT group had a better prognosis (5-year RFS rate, 82.4%; 5-year OS rate, 92.9%) than the other groups. Figure shows the SMAD4 immunohistochemical scores at the central lesion and the invasion front in each group. At the central lesion, there was no significant difference among the three groups. At the invasion front, we identified a significant difference (P = 0.039) between NAC-RT (average score 7.14 ± 3.51) and upfront surgery (average score 5.23 ± 3.75).
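The Kaplan–Meier and log-rank comparisons reported above were performed in JMP; a minimal open-source sketch of the same type of analysis is shown below using the lifelines package. The follow-up times and event indicators are invented example data, not patient data from this study.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical follow-up times (months) and event indicators (1 = event observed)
t_absent = np.array([4, 9, 12, 15, 20, 26])
e_absent = np.array([1, 1, 1, 0, 1, 1])
t_present = np.array([10, 18, 24, 30, 36, 48, 60, 72])
e_present = np.array([1, 0, 1, 0, 0, 1, 0, 0])

# Fit one Kaplan-Meier curve per group
kmf_absent = KaplanMeierFitter().fit(t_absent, event_observed=e_absent, label="SMAD4 absent")
kmf_present = KaplanMeierFitter().fit(t_present, event_observed=e_present, label="SMAD4 present")
print(kmf_absent.median_survival_time_, kmf_present.median_survival_time_)

# Compare the two groups with a log-rank test
result = logrank_test(t_absent, t_present,
                      event_observed_A=e_absent,
                      event_observed_B=e_present)
print(f"log-rank p = {result.p_value:.3f}")
```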
SMAD4 is an intracellular transcriptional mediator of the TGFβ signaling pathway. SMAD4 has been reported to function mainly as a tumor suppressor through cell cycle arrest, apoptosis, and differentiation. However, SMAD4 has also been reported to trigger EMT through the induction of the EMT-associated transcription factors Snail, ZEB1, and ZEB2, or to function as a tumor promoter. Along with KRAS and p53 , SMAD4 is one of the most frequently mutated genes in eBDC . To understand the molecular pathology of BTCs, it is indispensable to elucidate the function of SMAD4. Our previous studies demonstrated high SMAD4 and N-cadherin staining at the invasion front in resected BTC specimens and suggested that EMT may be induced at the cancer invasion front . Our objective was to analyze the intra-tumoral localization of SMAD4 in resected BTC specimens and determine the significance of SMAD4 localization using immunohistochemistry. We demonstrated that patients without SMAD4 expression at the central lesion or tumor invasion front have poorer OS than those with SMAD4 expression at any site. Furthermore, when patients were classified into three groups (absent, low, high) according to the intensity of functional SMAD4 in each area, the absent group had the poorest prognosis with regard to RFS and OS. In addition, patients whose SMAD4 expression in the metastatic lymph node was negative had poorer OS than SMAD4-positive patients. On the other hand, among patients with recurrence, high SMAD4 expression was significantly related to shorter survival time after recurrence. Moreover, the SMAD4 immunohistochemical score at the tumor invasion front in the group treated with chemo-radiation therapy was higher than the score in the upfront surgery group. The data presented here demonstrate that the loss of SMAD4 immunostaining was a poor prognostic factor for OS in the upfront surgery group. This result may indicate that, in eBDC that can be radically resected, SMAD4 inactivation results in a biologically more aggressive form. Previous studies reported that patients without SMAD4 protein expression had a shorter survival time in pancreatic cancer , colon cancer , and esophageal cancer . In studies of SMAD4 in biliary tract cancer, organ-specific disruption of SMAD4 was shown to induce tumorigenesis of cholangiocarcinoma . Moreover, mutation of the SMAD4 gene was a poor prognostic factor in intrahepatic BDC, and the loss of SMAD4 according to immunohistochemistry was significantly associated with distant metastasis . The present results are consistent with these previous reports. Among cases with recurrence, those with high SMAD4 expression at any site had a shorter survival time than those with low SMAD4 expression. This result may indicate that high expression of functional SMAD4 induces chemoresistance or that tumor cells with high SMAD4 expression become more aggressive once the tumor has progressed to distant metastasis. However, to make such an interpretation, we must assume that SMAD4 function at the invasion front of resected specimens is essentially the same as its function in recurrent tumors. No reports have examined the prognosis after recurrence in BTCs stratified by SMAD4 immunostaining status. However, other reports have shown that, in advanced tumors, intact SMAD4 facilitates EMT and TGFβ-dependent tumor growth .
We previously found that SMAD4 expression levels are enhanced in the gemcitabine-resistant BDC cell line MzChA-1_GR compared to the parent MzChA-1 cell line, and that MzChA-1_GR cells acquired malignant potential through EMT depending on SMAD4 function . It has also been reported that, among patients with advanced pancreatic cancer who undergo palliative chemotherapy before resection, those with preserved SMAD4 expression have significantly shorter progression-free survival than those with lost SMAD4 expression . TGFβ signaling has also been suggested to play a tumor-suppressor role in early-stage tumors but a tumor-promoter role in late-stage tumors. Our results may reflect this theory from previous reports. Functional SMAD4 was expressed at higher levels at the invasion front in patients treated with NAC-RT than in those treated with upfront surgery. OS and RFS were highest in the NAC-RT group compared to the other two treatment groups. This result may indicate that tumor cells with low functional SMAD4 expression were killed by chemo-radiation therapy, whereas tumor cells with high functional SMAD4 expression survived. In short, tumor cells with low functional SMAD4 expression may be radio-sensitive. There have been no reports regarding the evaluation of functional SMAD4 in resected specimens with respect to chemo- and/or radio-sensitivity, and we could not find a report supporting our results. Regarding SMAD4 mutation (deletion), it has been associated with resistance to radiotherapy in colorectal cancer and pancreatic ductal carcinoma. To examine our results in more detail, we need to compare biopsy specimens taken before neoadjuvant therapy with resected specimens taken after neoadjuvant therapy. We acknowledge that the present analysis has several limitations. First, this was a retrospective analysis at two institutions and included a small number of patients. Second, evaluation by immunohistochemistry alone is insufficient to assess SMAD4 expression; SMAD4 expression should therefore also be evaluated by other methods, such as Western blotting and PCR. Finally, the present evaluation method for immunostaining cannot cover all areas of a tumor. In the near future, image analysis using high-performance software will be needed to capture the immunostaining of the entire tumor. In summary, the present results demonstrate that loss of SMAD4 expression is a poor prognostic factor in resectable eBDC and that high expression of functional SMAD4 is a poor prognostic factor in recurrent eBDC. These results may indicate that SMAD4 has a bidirectional function as both a tumor promoter and a tumor suppressor. In the presence of SMAD4 deletion, pathways other than TGFβ/SMAD may drive cancer promotion. The function of SMAD4 is complex and could not be fully explained in this study; further investigations of the dual function of SMAD4 are needed.
The loss of SMAD4 protein expression was a poor prognostic factor in eBDC. The intensity of functional SMAD4 staining in eBDC is a marker of resistance to chemotherapy and radiotherapy. The localization of functional SMAD4 plays a complicated role in eBDC that relates not only to the natural course of BTC after surgery but also to chemo- and radio-sensitivity.
Additional file 1: Supplemental Figure 1. Histogram of the SMAD4 immunohistochemical score for resected specimens. Supplemental Figure 2. Kaplan-Meier survival curves for 67 patients with SMAD4 expression at either the central lesion or invasion front stratified by the SMAD4 status in each area. Supplemental Figure 3. Kaplan-Meier survival curves for 73 patients who underwent upfront surgery stratified by treatment with adjuvant chemotherapy. Supplemental Figure 4. Kaplan-Meier survival curves for patients who underwent upfront surgery stratified by treatment with adjuvant chemotherapy. Supplemental Figure 5. Kaplan-Meier survival curves for patients who underwent upfront surgery stratified by treatment with adjuvant chemotherapy. Supplemental Figure 6. Kaplan-Meier survival curves for 98 patients stratified by neoadjuvant treatment. Additional file 2: Supplemental Table 1. Association between clinicopathological factors and SMAD4 expression. Supplemental Table 2. Association between SMAD4 expression at each area and clinicopathological factors among 67 patients except cases without SMAD4 expression at any site. Supplemental Table 3. Association between clinicopathological factors and SMAD4 expression at central lesion. Supplemental Table 4. Association between clinicopathological factors and SMAD4 expression at invasion front. Supplemental Table 5. Univariate and Multivariate analysis for recurrence free survival of 73 patients with upfront surgery group. Supplemental Table 6. Univariate and Multivariate analysis for overall survival of 73 patients with upfront-surgery group. Supplemental Table 7. Association between clinicopathological factors and neoadjuvant treatment.
Diagnostic Accuracy of a Mobile AI-Based Symptom Checker and a Web-Based Self-Referral Tool in Rheumatology: Multicenter Randomized Controlled Trial | aec26288-6b75-49bf-b4de-ff6e201bd9a5 | 11303907 | Internal Medicine[mh] | Symptoms caused by inflammatory rheumatic diseases (IRDs) are often unspecific and difficult to correctly interpret for patients and even for experienced rheumatologists . This diagnostic complexity frequently results in significant delay , which can diminish treatment efficacy and lead to progressive damage . To address these challenges, a variety of freely available, patient-centered diagnostic decision support systems (DDSSs) have emerged and are increasingly being used by the general public and patients with IRDs . These DDSSs offer disease suggestions and advice for action within a few minutes and without any health care provider contact. Rheport is a web-based rheumatology referral system used in Germany to automatically triage appointments of new patients according to IRD probability . To date, Rheport has been used to schedule more than 3000 appointments . Ada , an artificial intelligence (AI)–based chatbot, is one of the most promising DDSS currently available. Multiple case-vignette studies showcased its high diagnostic accuracy , and more than 30 million symptom assessments have been completed in 130 countries . Despite the expanding usage, little evidence is available regarding the accuracy of DDSSs in rheumatology . To our knowledge, Rheport and Ada are among the most widely used DDSSs in rheumatology within Germany . However, a direct comparative study between these 2 systems has yet to be conducted. Therefore, the aim of this analysis was to evaluate the diagnostic capability of Ada and Rheport in identifying IRDs. Study Design and Participants The study design for this pragmatic, prospective, multicenter, crossover randomized controlled trial (German Register of Clinical Trials DRKS00017642) has been described elsewhere in detail . Results are presented according to the CONSORT-EHEALTH checklist . Adult patients with musculoskeletal symptoms who had been referred for the first time to 3 recruiting rheumatology outpatient clinics with a suspected diagnosis of an IRD were consecutively included in this study. Participants were instructed to enter the required data into both Ada and Rheport while waiting for their scheduled appointment with the rheumatologist. Assistance from support staff was available if needed. Patients were randomized 1:1 by computer-generated block randomization into group 1 (first Ada, then Rheport) or group 2 (first Rheport, then Ada), with each block comprising 100 patients. This crossover design was chosen to mitigate potential bias from a priming effect, where completing the first DDSS could influence responses to the second DDSS without the patient’s awareness. For instance, a priming effect was previously observed where participants who answered questions about their religiosity before reporting their alcohol consumption indicated fewer drinks on peak drinking occasions . A designated project manager, uninvolved in the recruitment process, was responsible for assigning patients to the intervention arms. The statistician was kept blinded for group allocation. Assistance from the study personnel was available for DDSS completion when needed. 
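As an illustration of the allocation procedure described above, the following sketch generates a 1:1 permuted-block sequence in Python; the function, the seed, and the use of Python itself are illustrative assumptions and do not reproduce the trial's actual computer-generated list.

import random

def block_randomization(n_patients: int, block_size: int = 100, seed: int = 2019) -> list:
    # Group 1 = Ada first, then Rheport; group 2 = Rheport first, then Ada.
    # Each block contains an equal number of both allocations and is shuffled,
    # so the two arms stay balanced after every completed block.
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n_patients:
        block = [1] * (block_size // 2) + [2] * (block_size // 2)
        rng.shuffle(block)
        sequence.extend(block)
    return sequence[:n_patients]

allocations = block_randomization(600)
print(allocations.count(1), allocations.count(2))  # balanced 1:1 overall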
Ethical Considerations The study was approved by the ethics committee of the medical faculty of the University of Erlangen-Nürnberg, Germany (106_19 Bc), and was conducted in compliance with the Declaration of Helsinki. All patients provided written informed consent. The DDSSs Rheport comprises a static 23-item questionnaire designed to assess symptoms and generate an expert-derived weighted sum score . Median completion time was 8 minutes . A higher sum score correlates with an increased probability of an IRD. Rheumatologists utilizing this system can allocate slots for automatic patient scheduling. Based on the calculated IRD probability, patients are offered available appointments categorized into 4 urgency levels. Patients with scores below 1 are considered unlikely to have an IRD and do not receive an appointment. Those with a minimum score of 1 are considered likely to have an IRD and are enabled to book an appointment. The urgency levels are categorized as follows: patients with scores between 1 and 2.4 are considered intermediate, patients with scores between 2.4 and 4 are considered urgent, and patients with scores exceeding 4 are considered very urgent. Patients in the very urgent category should ideally receive appointments within 1 week . Upon the acceptance of a proposed appointment by a patient, the rheumatologist is notified and provided with a structured summary report of the questionnaire. There were no changes to the Rheport algorithm during the study period. Ada is a native app and certified medical product, designed to cover a broad spectrum of symptoms and diseases. Programmed as a chatbot, Ada mimics traditional history taking by initially requesting basic health information, such as sex and age, followed by current symptoms. Based on these responses, the app generates individualized follow-up questions. Ada’s diagnostic suggestions are driven by a Bayesian network that is continuously updated . Upon the completion of symptom querying (median time: 7 minutes ), a summary report is generated, including (1) a summary of present, absent, and uncertain symptoms; (2) up to 5 disease suggestions with the corresponding probabilities, triage recommendations (eg, call an ambulance), and symptom relevance; and (3) access to basic information about the suggested diseases. Ada was regularly updated along the course of the study to ensure functionality. Outcome The primary end point of the study was concordant detection of any IRD diagnosis (including, eg, rheumatoid arthritis or systemic lupus erythematosus) by the DDSSs and the gold standard, that is, the rheumatologist’s final diagnosis, reported on the discharge summary report and adjudicated by the attending head physician of the local rheumatology department. For Rheport, the detection of an IRD by the DDSS was defined as a sum score of 1 or higher. Regarding Ada, we analyzed whether there was an IRD diagnosis and whether the correct diagnosis was listed as the top diagnosis (Ada top 1 [D1]) or was listed at all among all suggested diagnoses (Ada top 5 [D5]). Statistical Analysis Descriptive characteristics are presented as median and IQR for interval data and as absolute (n) and relative frequency (percentage) for nominal data. The minimum necessary sample size for this study was 122 in order to detect a specificity and sensitivity of at least 70% for Ada or Rheport, with a type I error of 4.4% and type II error of 19% using a 1-sample test against a benchmark accuracy of 50% based on a previous evaluation of DDSSs . 
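The Rheport scoring thresholds described above translate directly into a small triage function. This is a minimal sketch only; the handling of scores falling exactly on the 2.4 and 4 boundaries is an assumption, because the source does not specify it.

def rheport_triage(sum_score: float) -> str:
    # Maps a Rheport weighted sum score to the urgency categories described above.
    # Boundary handling at exactly 2.4 and 4.0 is assumed, not specified in the text.
    if sum_score < 1:
        return "IRD unlikely - no appointment offered"
    if sum_score < 2.4:
        return "intermediate"
    if sum_score <= 4:
        return "urgent"
    return "very urgent - appointment ideally within 1 week"

for score in (0.5, 1.8, 3.1, 4.6):
    print(score, rheport_triage(score))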
Operating characteristics of Ada and Rheport for a diagnosis of rheumatic disease were evaluated using sensitivity, specificity, negative predictive value, positive predictive value, and overall accuracy with respective 95% CIs. This evaluation was done both separately and for the combined use of the DDSSs. The agreement between the DDSSs was evaluated using the Cohen κ statistic, with values ≤0 indicating no agreement, 0.01-0.20 indicating none to slight agreement, 0.21-0.40 indicating fair agreement, 0.41-0.60 indicating moderate agreement, 0.61-0.80 indicating substantial agreement, and 0.81-1.00 indicating almost perfect agreement. We evaluated the cumulative proportion of correct diagnoses using Ada with exact CIs. We used a binomial regression with a log-link function to calculate the risk ratio for correct identification of any IRD by Rheport in comparison to Ada when the respective DDSS was used first and when it was used after the crossover. We preferred this method over logistic regression since the interpretation of risk ratios is more intuitive than that for odds ratios with high-prevalence binary outcomes. All analyses were conducted using the open-source R software (version 4.1.0; R Foundation for Statistical Computing) running under RStudio (version 1.4.1103; RStudio).
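The operating characteristics and the Cohen κ statistic described above can be computed from simple 2×2 counts. The Python sketch below uses purely illustrative counts (not the study data) and the Clopper-Pearson ("exact") interval from statsmodels; the original analysis was performed in R.

from statsmodels.stats.proportion import proportion_confint

def operating_characteristics(tp: int, fp: int, fn: int, tn: int, alpha: float = 0.05) -> dict:
    # Sensitivity, specificity, PPV, NPV and accuracy, each with an exact
    # (Clopper-Pearson) confidence interval.
    counts = {
        "sensitivity": (tp, tp + fn),
        "specificity": (tn, tn + fp),
        "ppv": (tp, tp + fp),
        "npv": (tn, tn + fn),
        "accuracy": (tp + tn, tp + fp + fn + tn),
    }
    return {name: (k / n, proportion_confint(k, n, alpha=alpha, method="beta"))
            for name, (k, n) in counts.items()}

def cohen_kappa_2x2(a: int, b: int, c: int, d: int) -> float:
    # a = both tools positive, b = only tool 1 positive,
    # c = only tool 2 positive, d = both tools negative.
    n = a + b + c + d
    p_observed = (a + d) / n
    p_expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    return (p_observed - p_expected) / (1 - p_expected)

# Illustrative counts only.
print(operating_characteristics(tp=140, fp=180, fn=74, tn=206))
print(round(cohen_kappa_2x2(a=90, b=120, c=80, d=310), 2))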
Participants A total of 755 consecutive patients were approached between September 2019 and April 2021, of whom 654 (87%) agreed to participate and 600 (79.4%) were included in the analysis. The participation exceeded the minimal sample size calculation, since patients were very eager to participate and considered the study a welcome distraction during the waiting time for their appointment. Overall, 35.7% (214/600) of the patients were diagnosed with an IRD based on physicians' judgment. The demographic characteristics of the patients and the physicians' final diagnoses are presented in the accompanying tables. Diagnostic Accuracy of Ada and Rheport Rheport showed an overall sensitivity of 62% and a specificity of 47% for IRDs. Ada's D1 and D5 disease suggestions showed a sensitivity of 52% and 66%, respectively, and a specificity of 68% and 54%, respectively, concerning IRDs. The odds ratio for Rheport correctly suggesting a rheumatic disease diagnosis in comparison to Ada D5 as the first used DDSS was 0.89 (95% CI 0.83-0.97). When the initial DDSS was Ada, the accuracy of Ada D5 was 61% (95% CI 55%-66%) and the accuracy of Rheport was 53% (95% CI 47%-59%), whereas after the crossover, this odds ratio was 0.98 (95% CI 0.91-1.06) with corresponding accuracies of Ada D5 at 56% (95% CI 50%-61%) and of Rheport at 52% (95% CI 46%-58%). Ada's diagnostic accuracy regarding individual diagnoses was heterogeneous. Ada suggested the correct diagnosis as the top suggestion (Ada D1) in 42% (29/69) of patients with rheumatoid arthritis, and the correct diagnosis was suggested overall (Ada D5) in 64% (44/69); moreover, the first suggestion of Ada (Ada D1) was correct in 22% (14/65) of patients with spondyloarthritis (including axial spondyloarthritis, peripheral spondyloarthritis, and psoriatic arthritis), and the correct diagnosis was suggested overall (Ada D5) in 38% (25/65). These findings suggest that Ada performed considerably better in identifying rheumatoid arthritis than other diagnoses. Agreement Between Rheport and Ada The Cohen κ statistic of Rheport for agreement on any rheumatic disease diagnosis with Ada D1 was 0.15 (95% CI 0.08-0.18) and with Ada D5 was 0.08 (95% CI 0.00-0.16), indicating poor or nonexistent agreement for the presence of any rheumatic disease between the 2 DDSSs.
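The ratios reported above come from the log-link binomial regression described in the Statistical Analysis section: with a log link, the exponentiated coefficient is a risk ratio rather than an odds ratio. The study itself used R; the statsmodels sketch below, with synthetic data and illustrative accuracy values, is only meant to show the mechanics.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic data: one row per assessment; 1 = DDSS suggestion matched the final diagnosis.
rng = np.random.default_rng(0)
data = pd.DataFrame({"tool_is_rheport": rng.integers(0, 2, 600)})
p_correct = np.where(data["tool_is_rheport"] == 1, 0.53, 0.61)  # illustrative accuracies
data["correct"] = rng.binomial(1, p_correct)

X = sm.add_constant(data[["tool_is_rheport"]])
# Binomial family with a log link; log-binomial models can fail to converge,
# which should be checked before interpreting the estimates.
# (Older statsmodels versions spell the link class as links.log().)
model = sm.GLM(data["correct"], X, family=sm.families.Binomial(link=sm.families.links.Log()))
result = model.fit()

risk_ratio = float(np.exp(result.params["tool_is_rheport"]))
rr_ci = np.exp(result.conf_int().loc["tool_is_rheport"]).tolist()
print(round(risk_ratio, 2), [round(v, 2) for v in rr_ci])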
Principal Findings This prospective, multicenter randomized controlled trial investigated the diagnostic accuracy of 2 DDSSs regarding IRDs. Overall, the diagnostic accuracy of both DDSSs was limited. Rheport was less likely to correctly identify any IRD when used as the first DDSS; this difference was not reproduced when it was used as the second DDSS. The diagnostic accuracy was comparably low with both tools, although Ada is an AI-based chatbot whereas Rheport is built on a simple weighted-sum-score questionnaire and expert opinion. The low negative predictive values of both Ada and Rheport suggest frequent errors when used as rheumatic screening tools. The low diagnostic accuracy is all the more alarming because this study population already had a higher pretest probability: patients were highly preselected, with physicians explicitly referring only patients with a suspected IRD. Overall, these final results confirm the low diagnostic accuracy of both DDSSs observed in the interim analysis, which included 164 patients from a single participating center. This study also confirms the case-dependent variation in the diagnostic accuracy of Ada, which was highest for rheumatoid arthritis, in line with the results of a previous vignette-based study. Two previous studies have demonstrated the strong user dependence of Ada's diagnostic accuracy. The low DDSS accuracy for IRDs is in line with previous results from smaller studies. In a pilot study, Powley et al showed that only 19% of patients with IRDs were correctly identified. Similarly, Proft et al recently showed that only 19% of patients using an axial spondyloarthritis self-referral tool were actually correctly diagnosed. A general reason for the low diagnostic accuracy of symptom-based DDSSs in rheumatology could be the lack of available information compared with what is available to the physician.
Ehrenstein et al previously demonstrated that the diagnostic accuracy of experienced rheumatologists regarding correct identification of IRDs solely based on medical history was only 14%. The low accuracy of DDSSs poses substantial challenges, as inaccurate diagnoses can cause misutilization of scarce health care resources, anxiety among patients, and frustration among health care professionals. Complementing subjective symptom descriptions with objective laboratory values obtained via self-sampling could improve the accuracy of DDSS suggestions while preserving the advantages of remote care. Furthermore, the application of machine learning has proven effective in improving the diagnostic accuracy of the current Rheport algorithm, highlighted by an increase in the area under the receiver operating characteristic curve from 0.534 to 0.737. The top 5 most significant features identified by the best-performing logistic regression model for IRD classification included finger joint pain, elevated inflammatory marker levels, the presence of psoriasis, symptom duration, and female sex. Additionally, the integration of large language models such as ChatGPT could significantly improve DDSS performance. In a recent study, we demonstrated that ChatGPT achieved diagnostic accuracy for rheumatic diseases comparable to that of experienced rheumatologists when both were provided with identical summary reports generated by patients using Ada. We also believe that usability may be improved by incorporating the free-text input and voice-enabled features of ChatGPT. A scoping review has highlighted the poor diagnostic accuracy, lack of evidence, and absence of regulation for patient-facing DDSSs. To address these issues, we echo the calls for the implementation of stricter regulatory frameworks, certification procedures, and ongoing monitoring to close these regulatory gaps. Limitations A main limitation of the study is the fact that patients were already screened by referring physicians, resulting in a much higher a priori chance of having an IRD. Furthermore, help from assisting personnel was available if needed for DDSS completion. To our knowledge, this is the largest comparative DDSS trial with real patients; however, the results of this study are not automatically transferable to other disciplines, languages, patient groups, and DDSSs. We therefore call for future studies involving real patients to build more solid evidence. Conclusions Overall, the diagnostic accuracy and agreement of both DDSSs regarding IRDs were limited. Improvements are needed to ensure DDSS safety and efficacy. The results suggest that physicians and the complex process of establishing a medical diagnosis cannot be replaced by an algorithm-based or AI-based DDSS. Future studies are needed to evaluate the generalizability of our findings.
Antimicrobial use practices in canine and feline patients with co-morbidities undergoing dental procedures in primary care practices in the US | b4b806e9-4835-413c-b106-155809ea46b7 | 11236167 | Dental[mh] | Antimicrobials have become a mainstay in managing oral health conditions and preventing systemic infections in veterinary dentistry. However, with the rising awareness of antimicrobial stewardship and increasing knowledge regarding optimizing the management of patient co-morbidities, the administration of antimicrobials demands careful consideration and tailored approaches. Additionally, concurrent surgical procedures (e. g., removal of cutaneous or subcutaneous neoplasia) during dental procedures pose further challenges in ensuring optimal patient care and antimicrobial stewardship. Since dental procedures invariably involve trauma to mucosal surfaces, bacterial translocation is presumably relatively common. Study in dogs and cats has been limited but in humans, bacteremia is a common event, occurring in 36–44% of individuals undergoing scaling and root planning, 62–66% undergoing extractions and in 27–28% of dental prophylaxis and probing without scaling and root planning . Bacterial translocation can pose a risk for development of extra-oral infections in some patients, most notably development of infective endocarditis . However, despite the high incidence of bacteremia during routine dental procedures, infectious consequences are rare and are of greatest concern in a small subset of the population with pre-existing and severe cardiovascular abnormalities [ , – ]. Accordingly, antimicrobial prophylaxis is not recommended for routine procedures, but is reserved for high risk patients such as those with patent ductus arteriosus, unrepaired cyanotic congenital heart defect, subaortic or aortic stenosis, embedded pacemaker leads or previous infective endocarditis . Veterinary guidance is limited. The American Veterinary Dental College has a more permissive statement that systemic antimicrobials are ‘ recommended to reduce bacteremia for animals that are immunocompromised , have underlying systemic disease (such as certain clinically-evident cardiac disease (subaortic stenosis) or severe hepatic or renal disease) and/or when severe oral infection is present ” . The American Animal Hospital Association guidelines indicate that antimicrobials ‘ may be indicated in patients with systemic risk factors , such as subaortic stenosis , systemic immunosuppression and orthopedic implants placed in the last 12–18 months ” . The British Small Animal Veterinary Association (BSAVA) has updated and released their guidelines for the use of antimicrobials in oral infections. As per the BSAVA antibacterials should be used perioperatively in patients that are immunosuppressed, that have significant comorbidities, when the procedure is long, or there is bony involvement . The objective of this retrospective study was to evaluate antimicrobial use in dogs and cats undergoing routine dental procedures, without periodontal disease or extractions, to understand the influence of underlying health issues such as cardiovascular, endocrine, hepato-renal compromise, as well as feline retroviruses (i.e., feline immunodeficiency virus and feline leukemia virus) on antimicrobial prophylaxis. The focus was on understanding the factors that likely influenced the choice of antimicrobial drugs and treatment durations, in these specific patient groups. 
We hypothesized that the presence of these specific co-morbid conditions in dogs and cats undergoing professional dental cleanings would be associated with increased antimicrobial usage. We also expected that these co-morbidities would affect the duration of antimicrobial treatment. By retrospectively examining real-world scenarios, the study sought to provide information that could lead to more informed, evidence-based guidelines for antimicrobial usage in veterinary dentistry.
Information regarding dogs and cats undergoing dental procedures at Banfield Pet Hospital in 2020 was acquired by searching the proprietary medical record system of the practice network, PetWare®. Data on dental procedures, including diagnosis of periodontal disease and the occurrence of extractions, were collected. Animals with a diagnosis of periodontal disease or those that underwent extractions were intentionally excluded from the dataset to remove their potential influence on prescribing practices. Co-morbidity data, structured as clinical diagnostic codes, were obtained and categorized into cardiovascular, endocrine, hepato-renal compromise, retroviral status (cats only), history of previous tibial plateau levelling osteotomy (TPLO) (dogs only), and whether concurrent cutaneous or subcutaneous neoplastic mass removal was performed at the time of the dental cleaning. Clinical diagnoses were limited to those that are more readily identified and more commonly seen in primary care practice. Structured pharmaceutical data included drug name, concentration, route of administration, dose, and duration of treatment. Animal species, age, and weight were also collected. Written consent for medical data analysis was obtained from clients for every pet included in the analysis, prior to treatment. Institutional Review Board approval was not required for this study, as there was no access to client data and the study qualified as a quality assurance and quality improvement activity. Univariable analysis was performed using chi-squared analysis for categorical data and logistic regression for continuous data. Multivariable analysis was performed using stepwise forward logistic regression. Variables with P<0.20 were entered into the model, with the final threshold for significance set at P≤0.05. Non-significant variables were not retained in the model unless they were deemed to be confounders. Confounders were identified by observing the changes in the coefficients of other variables after removing the target variable; the confounder was forced into the final model if a change of >10% occurred for any variable. Two-way interactions were tested and retained in the model if significant. Odds ratios and 95% confidence intervals were calculated. Model fit was evaluated using the Whole Model test. Standard least squares analysis was used to evaluate factors associated with treatment duration. Analysis was performed using JMP (SAS Institute, Cary, NC, USA).
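The confounder rule described above (a non-significant variable is kept if removing it shifts another variable's coefficient by more than 10%) can be illustrated with a short sketch. The analysis itself was done in JMP; the Python/statsmodels version below, with synthetic data and made-up variable names, simply demonstrates the change-in-coefficient check.

import numpy as np
import pandas as pd
import statsmodels.api as sm

def coefficient_shift(df: pd.DataFrame, outcome: str, predictors: list, candidate: str) -> pd.Series:
    # Percent change in each remaining coefficient when `candidate` is dropped
    # from a logistic regression model (the >10% screen described above).
    full = sm.Logit(df[outcome], sm.add_constant(df[predictors])).fit(disp=False)
    reduced_vars = [p for p in predictors if p != candidate]
    reduced = sm.Logit(df[outcome], sm.add_constant(df[reduced_vars])).fit(disp=False)
    return ((reduced.params[reduced_vars] - full.params[reduced_vars])
            / full.params[reduced_vars]).abs() * 100

# Synthetic example: antimicrobial use modelled from two comorbidity indicators.
rng = np.random.default_rng(1)
data = pd.DataFrame({
    "cardiovascular": rng.integers(0, 2, 2000),
    "hepato_renal": rng.integers(0, 2, 2000),
})
logit_p = -2.3 + 0.5 * data["cardiovascular"] + 0.3 * data["hepato_renal"]
data["antimicrobial"] = rng.binomial(1, (1 / (1 + np.exp(-logit_p))).to_numpy())

shift = coefficient_shift(data, "antimicrobial", ["cardiovascular", "hepato_renal"], "hepato_renal")
print(shift)  # keep 'hepato_renal' as a confounder if any shift exceeds 10%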
Data were obtained for 681,541 dental cleanings (no extractions and no periodontal disease noted), performed on 592,472 dogs and 89,069 cats at 1076 veterinary clinics in 44 US states. Systemic antimicrobials were administered in 51,986 (8.8%) procedures in dogs and 6936 (7.8%) in cats. The most common antimicrobials and antimicrobial combinations used are presented in the accompanying table. Overall, 82,064 (92%) of cats had none of the co-morbidities described in the methods, while 6,545 (7.3%) had one, 449 (0.5%) had two and 11 (0.012%) had three. In dogs, 520,948 (88%) had none of these co-morbidities, while 66,400 (11%) had one, 4,811 (0.8%) had two, 304 (0.05%) had three and nine (0.002%) had four. Not all cats and dogs with comorbidities were treated with antimicrobials. Different patterns of antimicrobial use were seen when evaluated by the combined number of different co-morbidities (i.e., hepato-renal, cardiovascular, endocrine, retroviral infection, TPLO, cutaneous or subcutaneous neoplasia removal). Antimicrobial use by number of co-morbidities (excluding dogs with four because of the small sample size) is presented separately for cats and dogs in the accompanying figures. In cats, a marked difference in antibiotic choice was noted in patients with three co-morbidities, all of which received cefovecin. In dogs, on the other hand, a difference was noted between patients with no co-morbidities and those with one, with a noticeable increase in the use of cefpodoxime. Cefovecin use was also increased in dogs with three co-morbidities; however, unlike in cats, it was not the only antimicrobial used. Univariate analyses of association with systemic antimicrobial administration The univariable analyses, presented in the accompanying tables, showed significant associations between antimicrobial administration and the reported co-morbidities in both cats and dogs.
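The univariable chi-squared analysis referred to above amounts to testing a 2×2 table of comorbidity status against antimicrobial administration, and a crude odds ratio can be read off the same table. The counts below are illustrative only, not the study's.

import numpy as np
from scipy.stats import chi2_contingency

# Rows: comorbidity present / absent; columns: antimicrobial given / not given.
table = np.array([
    [1200, 5300],     # illustrative counts
    [5700, 76900],
])
chi2, p_value, dof, expected = chi2_contingency(table)

a, b = table[0]
c, d = table[1]
odds_ratio = (a * d) / (b * c)
se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low, ci_high = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)
print(round(chi2, 1), p_value, round(odds_ratio, 2), (round(ci_low, 2), round(ci_high, 2)))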
Impact of age and weight on antimicrobial administration Cats that received antimicrobials were older and lighter, with an average age of 8.4 years (standard deviation (SD) 4.1) and an average weight of 5.2 kg (SD 1.4). In contrast, cats that did not receive antimicrobials were younger, with an average age of 7.7 years (SD 3.8), and slightly heavier, with an average weight of 5.4 kg (SD 1.4) (both P<0.0001). Endocrine or hepato-renal compromise, retroviral infection and concurrent cutaneous or subcutaneous neoplasia removal (i.e., performed at the time of dental prophylaxis) were associated with antimicrobial use in cats in the multivariable model. Dogs that received antimicrobials were older and heavier, with a mean age of 8.4 years (SD 3.3) and a mean weight of 16.6 kg (SD 13.1), while those that did not had a mean age of 7.2 years (SD 3.2) and a mean weight of 16.3 kg (SD 12.9) (both P<0.0001). Cardiovascular, hepato-renal compromise and endocrine comorbidities, as well as concurrent cutaneous or subcutaneous mass removal, were associated with antimicrobial use in dogs in the multivariable analysis. Impact of variables on duration of treatment Only cutaneous or subcutaneous mass removal was associated with treatment duration in cats (P = 0.008). Counterintuitively, duration of treatment was shorter in cats that underwent concurrent cutaneous or subcutaneous mass removal than in those that did not have a cutaneous or subcutaneous mass removed at the time of the dental procedure (mean 8.4 vs 9.2 days). Cardiovascular and hepato-renal compromise comorbidities, as well as concurrent TPLO, were associated with variations in treatment duration in dogs (all P<0.0001). Compared to patients without co-morbidities, duration was longer in dogs with hepato-renal abnormalities (10.1 vs 9.6 days) but shorter in dogs with cardiovascular comorbidities (9.2 vs 9.6 days) and in dogs that had previously undergone TPLO (8.1 vs 9.6 days). Antimicrobial administration for different co-morbidities Relative use of antimicrobials for the different co-morbidities in cats and dogs is depicted in the accompanying figures. Cats with co-morbidities were mainly treated with cefovecin. In dogs with cardiovascular, endocrine, or hepato-renal compromise, treatment with clindamycin predominated, whereas in dogs undergoing cutaneous or subcutaneous mass removal or with a history of TPLO, treatment with cefpodoxime predominated. In the analysis focusing on dogs and cats treated with antimicrobials, several statistically significant associations were observed between specific antimicrobials and clinical factors. In cats, cutaneous or subcutaneous mass removal at the time of dental prophylaxis was positively associated with the use of cefovecin and negatively associated with the use of clindamycin and amoxicillin. On the other hand, retroviral infection was positively associated with the use of clindamycin. In dogs, cutaneous or subcutaneous mass removal at the time of dental prophylaxis was positively associated with the use of cefovecin and cefpodoxime, while the use of clindamycin, amoxicillin, amoxicillin-clavulanate and metronidazole was negatively associated. In dogs with hepato-renal compromise comorbidities, the use of amoxicillin, amoxicillin-clavulanate and metronidazole was positively associated, whereas the use of clindamycin and cefpodoxime was negatively associated.
Additionally, cardiovascular diseases in dogs were positively associated with the use of clindamycin but negatively associated with amoxicillin and cefpodoxime administration. Intravenous antimicrobial administration Peri-operative intravenous antimicrobials were uncommonly used, being administered to only 1040/53,323 (2.0%) of dogs and 44/6986 (0.6%) of cats that received antimicrobials. When the analysis included only animals that received antimicrobials, cardiovascular (OR 1.6, 95% CI 1.25–1.99, P<0.0001) or hepato-renal (OR 1.34, 95% CI 1.04–1.73, P = 0.023) comorbidities, as well as cutaneous or subcutaneous mass removal (OR 2.05, 95% CI 1.8–2.3, P<0.0001), were associated with the use of intravenous antimicrobials in dogs. In cats, hepato-renal comorbidity was the only variable associated with intravenous antimicrobial use compared with other routes of administration (OR 2.9, 95% CI 1.1–7.5, P = 0.027).
The discussion on antimicrobial use in veterinary dentistry, considering patient co-morbidities, highlights intriguing associations. A noteworthy aspect of the study is the association of different drugs with specific co-morbidities and risk factors. The study also reveals longer treatment durations in dogs with hepato-renal compromise than those without. In all these cases, clinicians may exhibit concern regarding the elevated risk of bacterial infections arising from the introduction of bacteria into the bloodstream during dental procedures or from pre-existing bacterial colonization in the oral cavity. However, the true risk of clinical bacterial infections from dental procedures remains unknown in veterinary practice. In the realm of veterinary dentistry, an important aspect to consider is the absence of established guidelines concerning antibiotic usage during dental procedures for patients with co-morbidities in the USA . Understandably, veterinarians may err on the side of caution out of a well-founded desire to prioritize patient safety and mitigate risks. Since antimicrobials are not innocuous, direct evidence of a benefit is lacking and comparative data from humans does not support such widespread use of antimicrobials, a ‘least harms’ approach could be considered instead [ , , , , – ]. Multiple comorbidities were associated with an increased likelihood of antimicrobial use in our cohorts. It is important to consider that not every animal with indications of possible co-morbidities will undergo full evaluation to obtain a definitive diagnosis prior to dental treatment under anesthesia. Consequently, at times the risk is not completely understood prior to treatment. However, despite that, most animals with comorbidities did not receive antimicrobial prophylaxis, an encouraging finding given the presumed lack of need for antimicrobials in most situations [ , – ]. While there are contrasts between human and veterinary dentistry, the pathophysiology of infection concerns is likely very similar. Therefore, it is reasonable to consider human guidelines. These tend to restrict antimicrobial use to a very narrow subset of the population and use short treatment durations [ , , , – ]. Many dogs received antimicrobials in the presence of a clinical diagnosis of a cardiovascular disease. The 2007 American Heart Association guidelines, endorsed by the American Dental Association, Infectious Diseases Society of America and Pediatric Infectious Diseases Society, recommend prophylaxis of a relatively small population of patients, namely those with a prosthetic heart valve, previous infective endocarditis, unrepaired cyanotic congenital heart disease, completely repaired heart defect with prosthetic material placed within the past 6 months, repaired congenital heart diseases with residual defects at the site or heart transplant recipients with valvulopathy . These conditions would apply to very few veterinary patients. The impetus to treat cats with antimicrobials was seen in association with retroviral infections. Retroviral infections such as feline leukemia virus and immunodeficiency virus can compromise the immune system making cats more susceptible to opportunistic bacterial infections, such as oral, respiratory, and urinary tract infections. However, the degree of immunocompromise can vary, prompting questions about the perceived immunocompromised state and whether it is warranted in most cases . 
In humans with human immunodeficiency virus (HIV), routine peri-dental administration of antimicrobials is not recommended unless extractions are performed in patients with neutropenia (i.e., absolute neutrophil counts <500 cells/mm3). Pre-existing endocrine disease was also associated with an increased likelihood of antimicrobial administration in both dogs and cats. Routine antimicrobial prophylaxis is not recommended in humans with endocrinopathies. The prevalence of bacteriuria in dogs with hypercortisolism is lower than might be expected, with only 18% testing positive, and of those, 83% were subclinical. A single case report detailing local and systemic complications following dental extractions in a cat with diabetes exists; however, this patient was not well controlled and risk cannot be ascertained from a single case. Taken together, the level of immunocompromise caused by these endocrine diseases, if well controlled, is likely negligible, and there may be a risk of overuse of antimicrobials in these patients. The level of clinical control over the endocrine disease was not evaluated in this study, so it is uncertain whether patients were well controlled or poorly controlled at the time of the dental cleaning. Hepato-renal compromise at the time of dental procedures was also noted to be a risk factor for the administration of antimicrobial prophylaxis in both dogs and cats. Prophylactic antibiotics are recommended prior to invasive dental treatment in human patients with nephritic syndrome and kidney transplants; however, recommendations vary for chronic kidney disease. Antimicrobial prophylaxis is also suggested in patients with end-stage liver dysfunction who have ascites and spontaneous bacterial peritonitis. However, a large retrospective study of patients with liver transplants undergoing extractions concluded that there is no need for antimicrobial prophylaxis in that patient group as long as the technique is atraumatic. These scenarios in which antimicrobial prophylaxis is used in human medicine are not commonly seen in primary care veterinary practice; consequently, this may represent an area where significant reductions in antimicrobial use could be targeted without negatively impacting patient care. Concurrent cutaneous or subcutaneous mass removal was a relatively common event. This is an area where there is no ability to extrapolate from human dentistry, as those procedures would not be combined in humans. Presumably, the reason that antimicrobials were more likely to be used in animals undergoing concurrent dental procedures and cutaneous or subcutaneous mass removal was concern about bacteremia and aerosolization of bacteria from the dental procedure resulting in infection at the surgical site. No data are available regarding the incidence of infection when combining these types of procedures. However, general surgical principles support avoiding the combination of clean with contaminated procedures, considering that the mouth is populated by many bacteria in health and disease that may be harmful to the skin when it undergoes an insult (i.e., surgery). This area requires further study because combining these procedures is common, likely attributable to convenience, concern regarding non-compliance, and the desire to avoid multiple anesthetic episodes. Prevalence data regarding bacteremia in dogs and cats undergoing routine dental procedures are limited, but the bacteria involved have been described.
Blood cultures from patients undergoing these procedures have revealed various species, mostly a diverse range of oral commensals with varying disease risks. Among dogs undergoing dental procedures, the incidence of anaerobic bacteremia was higher (43%) than that of gram-positive aerobic (29%) and gram-negative aerobic (29%) bacteremia [ , – ]. However, bacteremia has also been documented to occur during tooth brushing and chewing, and in healthy human patients with a functioning reticuloendothelial system this is inconsequential. Despite the high prevalence of bacteremia, clinical consequences appear to be limited in the general patient population. Further, while 12% of dogs and 8% of cats had one or more co-morbidities of interest, and these were associated with an increased likelihood of antimicrobial administration, antimicrobial prophylaxis would rarely be recommended for those comorbidities under human guidelines. While veterinary data are lacking, there is no published evidence to support that dogs and cats would be at increased risk of disease following transient bacteremia compared to their human counterparts. In most cases, the lack of evidence supporting the need for antibiotics raises a critical discussion point. However, appropriate coverage must also be considered if antibiotic use is indicated. Increased clindamycin use for cardiovascular conditions would offer appropriate coverage given concerns about streptococcal and staphylococcal infections, even though it sacrifices gram-negative coverage. However, this would be indicated only in severe cardiovascular disease, which is rare in veterinary medicine. Similarly, the use of metronidazole to prevent secondary anaerobic infections in distant organs would provide adequate coverage; however, as discussed above, cases that would actually benefit from this are rarely seen in primary care practice. On the other hand, the use of third-generation cephalosporins (such as cefpodoxime) for cutaneous or subcutaneous mass removal may be deemed excessive, as is the use of a higher tier broad-spectrum antimicrobial in patients without evidence of periodontal disease or the need for tooth extractions. In humans, penicillins are the main drugs recommended for prophylaxis, with other options such as macrolides and aminoglycosides used in some situations. A variety of antimicrobials were identified in this study. Amoxicillin use would be consistent with typical human guidelines, but it accounted for <10% of use here. Cefovecin was commonly used, which raises concerns because of its higher tier status and long duration of action, as well as its limited activity against E. coli. Cefpodoxime was also commonly used, with similar concerns about the necessity of this higher tier drug. Cefovecin and cefpodoxime are administered every two weeks or once a day, respectively, so administration factors could also play a role in this selection. Amoxicillin-clavulanate is a lower tier option with excellent activity against common pathogens. Clindamycin is often used for oral disease because of its activity against staphylococci, streptococci, and anaerobes. Additionally, the surprisingly low use of amoxicillin raises questions about its potential underutilization in veterinary dentistry, particularly in situations such as a tibial plateau leveling osteotomy (TPLO), where it would typically be an ideal antimicrobial selection due to the risk of staphylococcal infections.
Various impacts of different comorbidities on treatment duration were identified in both dogs and cats. However, analysis of large datasets can identify significant but clinically irrelevant differences. Here, the differences in duration were likely negligible from clinical and stewardship standpoints. However, they might provide some insight into clinicians’ decision-making processes. One important area that could not be specifically evaluated here was timing of administration. Prophylaxis should be administered such that therapeutic drug levels are present during the period of risk. Accordingly, antimicrobials should be administered shortly before the procedure, ideally intravenously 30–60 minutes before the procedure, and typically do not need to be continued afterwards . Administration of antimicrobials after the procedure would likely be minimally effective, as the main period of risk would have passed. The uncommon use of intravenous antimicrobials, which would more likely have been administered prior to the procedure, indicates a need to better understand how antimicrobials are used and when they are administered. Both unnecessary use and suboptimal use are concerns from patient care and antimicrobial stewardship standpoints. Development and use of clinical guidelines could improve both of those areas. Overall, this study offers valuable insights into the thought processes behind drug selection based on patient co-morbidities and risk factors in veterinary dentistry. The findings suggest opportunities for improvement and education to refine antimicrobial approaches, ensuring optimal care for patients with varying health conditions undergoing dental procedures. Further research and discussion within the veterinary community are necessary to develop evidence-based guidelines and promote the responsible use of antimicrobials in dental care, ultimately advancing the field of veterinary dentistry and patient outcomes.
The spread of antimicrobial resistance in the aquatic environment from faecal pollution: a scoping review of a multifaceted issue

Antimicrobials, in the context of this review, include antibiotics, antiseptics and disinfectants. Antimicrobial resistance (AMR) among bacteria is an urgent global health issue. The World Health Organization (WHO) has deemed it “One of the greatest threats we face as a global community” (WHO, ). Approximately 700,000 people die from AMR bacterial infection each year, and, if no action is taken, this figure is predicted to increase to 10 million deaths per year by 2050 (WHO, ). The production of antibiotics by microorganisms is a natural process designed to kill or inhibit the growth of competing microorganisms. However, many bacteria have evolved diverse antibiotic resistance mechanisms as a basic survival component. Recent investigations have revealed that some antibiotic resistance mechanisms can also confer cross-resistance against antiseptics and disinfectants. This is a natural, unavoidable evolutionary process, as AMR genes have been found in remote environments with minimal human interaction, such as the Polar Regions (Nolan et al., ). The rise in AMR is partially the result of inappropriate use of antibiotics. In some jurisdictions, healthcare providers do not restrict the use of antibiotics, thereby preventing efficient monitoring of overall drug usage levels (Hartinger et al., ). Antibiotic mismanagement can occur in both the human and agricultural (mainly animal) sectors, resulting in the release of antibiotics and antibiotic residues into the environment (Food & Agriculture Organization of the United Nations, ; Maillard et al., ). This can create a selective pressure that favours more resistant bacteria, allowing these resistant organisms to multiply and leading to greater dissemination of resistance genes (Daly et al., ). The consequences of the rise of AMR are increased hospital treatment time and cost, along with an increased potential for mortality, resulting in further strain on healthcare systems (Hartinger et al., ). This has led to the requirement for increased surveillance of AMR bacteria and AMR genes following the One Health approach set out by the WHO (Larsson et al., ; Tiwari et al., ; WHO, ). The One Health initiative integrates the human, animal and environmental sectors, emphasising their interconnection. Surveillance in humans and animals has been well established, with environmental surveillance a more recent effort towards monitoring AMR progression (Huijbers et al., ; WHO, ). In recent years (Finley et al., ), the environment has been recognised as playing a major role as a source and route of dissemination of AMR. Wastewater treatment plants (WWTPs) are seen as one of the major sources of antimicrobial resistance genes (ARGs) and bacteria. Antibiotics consumed by animals and humans, along with antibiotic-resistant gut microbes, are excreted into the environment via faeces and urine (Karkman et al., ). Faecal pollution in the aquatic environment can occur from both identifiable (or point) sources such as wastewater treatment plants and septic tank effluents and non-identifiable (or non-point) sources such as agricultural runoff and animal defecation (Camiade et al., ; Flores et al., ; Ragot & Villemur, ). Rainfall events, which enable faeces to enter water systems, are a key driver of dissemination (Ahmed et al., ; González-Fernández et al., ).
The combination of these various sources of AMR highlights the complexity of tracking faecal pollution in the aquatic environment. Faecal pollution can have negative impacts on health, the economy and the environment, showing their interconnectedness and validating the One Health perspective (WHO, , ). As a result of faecal pollution, there is a deterioration in drinking water quality and a greater risk of waterborne disease outbreaks. This is due to increased exposure to potentially pathogenic bacteria via ingestion, physical contact (the dermal route) and accidental inhalation, with AMR complicating treatment (Hinojosa et al., ; Valério et al., ). These factors contribute to an increase in the potential disease burden and raise the risk of transmission, resulting in increased treatment costs and deterioration of recreational water quality. This can cause temporary or permanent closure of recreational sites, causing a loss of income (Díaz-Gavidia et al., ). The aim of this scoping review is to identify how faecal pollution contributes to AMR in surface water, the current methods for identifying sources of faecal pollution, and the most commonly studied ARGs.
No formal review protocol was registered; however, the PRISMA-ScR guidelines were followed as a framework, and the PRISMA-ScR checklist was completed to ensure transparent reporting. The following research questions were created to direct the review: “How does faecal pollution contribute to antimicrobial resistance in surface water?” and “How can the source of faecal pollution in surface water be tracked?”. There were no predetermined genes or locations targeted in the study, as one of the aims was to identify the most commonly studied genes. Search terms were decided after discussion between the authors and information specialists (Appendices (See Table )). The databases used to find literature were Medline Ovid® ALL 1946 to November 13, 2023, and Scopus®. From the search terms, a search string was created and adapted to each of the databases. All Medline searches were carried out using the mapped and unmapped features (Appendices (See Tables and )) to find literature relevant to the terms. All Scopus searches were carried out using the TITLE-ABS-KEY feature of the database, highlighting specific terms in the title, abstract and keywords. All relevant papers were exported to RefWorks®.

Inclusion criteria

The study was conducted in November 2023 and covered the period from January 2020 to November 2023 to identify papers that were recent at the time; this was to ensure that the review captured recent developments and the latest advances in methods used to identify sources of faecal pollution and track ARGs in surface waters. Previous literature reviews have already covered pre-2020 developments in this field, so including earlier developments would lead to duplication (Fewtrell & Kay, ; Cho, S. et al., ; Mathai et al., ). Only articles published in English were considered. Articles that included “antimicrobial resistance” and “aquatic environment” and mentioned “faeces” as a cause of deterioration of water quality were included. This included articles that mentioned “wastewater treatment plants/WWTPs”, “sewage treatment plants”, “septic tanks”, “antimicrobial resistant genes/ARGs”, “faecal indicator bacteria/species” (E. coli, streptococci, Enterococcus spp., including the faecal indicator phylum Firmicutes) and articles that mentioned “microbial source tracking”. Only primary articles were accepted.

Exclusion criteria

The study excluded articles that were about “groundwater”, articles outside the date range, articles that were looking at “faecal carriage” or “antibiotic pollution of aquatic environments”, analyses of other substrates such as “soil”, reviews, seminars and articles that only mentioned one of the three topics under investigation. Upon further screening, papers that did not identify the source through a scientific methodology such as source tracking or phylogenetic typing were rejected.
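As an illustration of the screening step described above, the sketch below shows how exported title/abstract records could be deduplicated and flagged against keyword-based inclusion and exclusion terms. This is a minimal sketch only: the CSV layout, column names, keyword lists and the two-term threshold are assumptions for illustration, and the actual screening for this review was performed manually in RefWorks®.

```python
import csv

# Illustrative keyword lists loosely based on the stated criteria; these are
# assumptions, not the authors' actual search or screening terms.
INCLUSION_TERMS = ["antimicrobial resistance", "faecal", "fecal",
                   "surface water", "wastewater", "microbial source tracking"]
EXCLUSION_TERMS = ["groundwater", "faecal carriage", "soil"]


def looks_eligible(title: str, abstract: str) -> bool:
    """Flag a record for full-text screening based on simple keyword rules."""
    text = f"{title} {abstract}".lower()
    if any(term in text for term in EXCLUSION_TERMS):
        return False
    # Require at least two distinct inclusion themes to be mentioned.
    return sum(term in text for term in INCLUSION_TERMS) >= 2


def screen(csv_path: str) -> list[dict]:
    """Deduplicate records on title and keep those passing the keyword flag."""
    seen, kept = set(), []
    with open(csv_path, newline="", encoding="utf-8") as handle:
        for row in csv.DictReader(handle):
            key = row["title"].strip().lower()
            if key in seen:          # duplicate export from the two databases
                continue
            seen.add(key)
            if looks_eligible(row["title"], row.get("abstract", "")):
                kept.append(row)
    return kept
```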
After conducting the searches on Medline Ovid® and Scopus®, a total of 637 and 107 papers, respectively, were identified, resulting in a combined total of 744 papers retrieved from both databases. On Medline Ovid®, following initial title and abstract screening, a total of 171/637 papers were kept and exported to RefWorks, while the other 466 papers were discarded. On Scopus, following initial title and abstract screening, a total of 87/107 papers were kept and exported to RefWorks, while the remainder were discarded; this gave a total of 258 papers. In all, 40 duplicate papers were removed, resulting in a total of 218 papers. The full screening of papers that (i) matched the inclusion and exclusion criteria and (ii) identified the source through means of tracking resulted in 185 papers being discarded, and a total of 33 papers were used for the review (Fig. ). After the full screening of 218 papers, a notable recurrent feature was observed. Papers such as Kimera et al. ( ) did not definitively identify the source and simply suggested that it was from human or animal sources based on the sampling location. Several studies, such as Herrig et al. ( ), also suggested that applying source tracking could improve the study.

Data characteristics

The retained papers reported on studies conducted on six different continents: Europe ( n = 11), Asia ( n = 11), North America ( n = 7), South America ( n = 2), Africa ( n = 1) and Oceania ( n = 1). The countries/regions in which the studies were conducted were Norway, Germany and France ( n = 2); Spain and Ireland ( n = 4); Poland and the Black Sea region (Romania, Ukraine and Georgia); Israel, Vietnam and China ( n = 7); Korea, Malaysia and the USA ( n = 6); and Canada, Brazil, Bolivia, Ethiopia and Australia (Fig. ).
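The screening flow reported above can be checked with simple arithmetic; the figures below are taken directly from the counts given in this section.

```python
# Figures taken from the screening flow reported above.
retrieved = 637 + 107                            # Medline Ovid + Scopus = 744 records
title_abstract_kept = 171 + 87                   # retained after title/abstract screening = 258
after_deduplication = title_abstract_kept - 40   # 40 duplicates removed = 218
included = after_deduplication - 185             # 185 discarded at full screening = 33

assert (retrieved, after_deduplication, included) == (744, 218, 33)
print(f"{retrieved} retrieved -> {after_deduplication} fully screened -> {included} included")
```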
This scoping review was conducted to identify (i) how faecal pollution contributes to antimicrobial resistance in surface water, (ii) the current methods for identifying sources of faecal pollution, and (iii) the most commonly studied ARGs. The search criteria ensured that relevant papers published between 2020 and 2023 were retrieved. The final selection for the review consisted of 33 papers that studied the impact of faecal pollution on the aquatic environment (Figs. and ). The studies covered a combination of freshwater, marine water, wastewater and sediment, highlighting the impact of faecal pollution on the deterioration of a variety of water sources used for drinking water, irrigation and recreation. The degradation of these water sources results in greater exposure to faecal pathogens, leading to more waterborne disease outbreaks, with AMR bacteria prolonging patient treatment (Valério et al., ). The majority of studies came from Europe and Asia, which together comprised two-thirds of the entire review. Geographically, spatiotemporal differences occur between countries, with factors such as population density, sanitation infrastructure and environmental conditions affecting levels of faecal pollution. None of the reference material came from Northern Ireland or, by extension, the UK as a whole, identifying a potential gap in knowledge and indicating an opportunity for increased scientific contribution from the UK. A limitation of using two databases is that some relevant literature may not be covered due to publication bias and search inconsistencies. The identification of duplicates indicates overlap, which can lead to redundancy in the findings as well as adding to the workload required to screen the data. Only a few of the studies followed standard methods set out by governing bodies and organisations that provide guidelines for testing procedures, such as EUCAST ( ) for antibiotic susceptibility testing and minimum inhibitory concentration determination, and for the enumeration of faecal indicator bacteria (FIB) (European Union, ). Enumerating FIB is used to assess water quality and indicate potential faecal contamination, suggesting the potential risk of exposure to faecal pathogens (European Union, ). However, there are many limitations to simply enumerating FIB; it does not identify the source, pathogenicity, virulence or resistance of strains (Williams et al., ). Therefore, studies could address this by utilising genetic analysis. A variety of genetic techniques are available: whole genome sequencing (WGS), Sanger sequencing, quantitative polymerase chain reaction (qPCR), metagenomics, multi-locus sequence typing (MLST) and phylotyping, but there was no standardised method used in these studies (Table ). However, all studies did carry out filtration of samples for the genetic analysis. A variety of methods to enumerate bacteria were used, including the IDEXX Colilert test (Yakub et al., ), filtering samples and placing the filters onto selective agar, or adding drops/microlitres of sample directly onto agar. The majority of papers that quantified bacterial counts used colony-forming units (CFU). However, a number of studies used most probable number (MPN) methods. Previous studies have shown that MPN figures are more variable and produce higher estimates than CFU (Gronewold & Wolpert, ; Cho, K. H. et al. ), highlighting the need for standardised methodologies to allow comparison between results.
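To make the CFU/MPN comparison concrete, the sketch below implements one widely used approximation for multiple-tube MPN estimation (Thomas' simple formula). It is included purely as an illustration of how MPN figures are derived; the tube configuration and counts are hypothetical, and the reviewed studies used their own kits and tables.

```python
from math import sqrt


def thomas_mpn_per_100ml(positive_tubes: int,
                         ml_in_negative_tubes: float,
                         ml_in_all_tubes: float) -> float:
    """Thomas' simple approximation for a multiple-tube fermentation test:
    MPN/100 mL = (positive tubes x 100) / sqrt(mL in negative tubes x mL in all tubes).
    """
    return (positive_tubes * 100) / sqrt(ml_in_negative_tubes * ml_in_all_tubes)


# Hypothetical 5-5-5 tube series with 10, 1 and 0.1 mL portions and
# 4, 2 and 1 positive tubes respectively (7 positives in total).
ml_all = 5 * 10 + 5 * 1 + 5 * 0.1        # 55.5 mL of sample across all tubes
ml_negative = 1 * 10 + 3 * 1 + 4 * 0.1   # 13.4 mL of sample in negative tubes
print(round(thomas_mpn_per_100ml(7, ml_negative, ml_all)))  # ~26 MPN/100 mL
```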
Antimicrobial resistance genes

Antimicrobial resistant (AMR) strains of FIB were identified in all the studies. The isolation of AMR isolates was carried out using media supplemented with the following antimicrobials: aztreonam, cefotaxime, ceftazidime, ceftriaxone, ciprofloxacin, colistin, fluconazole and meropenem. Additional investigations included antibiotic susceptibility testing (AST) and minimum inhibitory concentration (MIC) determination, with the resulting isolates of interest being subjected to genetic analysis. These antimicrobials are commonly used, and their presence in the aquatic environment may suggest whether urban or agricultural activity is influencing the waterway. Aztreonam, cefotaxime, ceftazidime, ceftriaxone and ciprofloxacin are used as first-line antibiotics for human infection, while colistin and meropenem are last resort antibiotics; the authors may also have selected these antibiotics to highlight the increase in resistance to clinically important agents. The most common beta-lactamase ARGs studied were bla CTX (N = 12), bla TEM (N = 12), bla OXA (N = 11) and bla SHV (N = 10). This is concerning as bla CTX, bla OXA and bla SHV confer resistance to carbapenems, which are clinically significant as last resort antibiotics (Reynolds et al., ). Another common ARG within this group was bla KPC (N = 8), which encodes a carbapenemase. These enzymes have gained much global attention as they have a broad spectrum, inactivating not only carbapenem antibiotics but also other clinically important beta-lactam antibiotics in both Gram-negative and Gram-positive bacteria (Chen, Z. et al. ). The predominant ARG studied encoding tetracycline resistance was the tet gene, with the most common subtypes being tet A (N = 8), tet O (N = 7), tet Q (N = 5), tet W (N = 5) and tet X (N = 5). For sulphonamides, the most common resistance gene studied was sul, with the most common subtypes being sul 1 (N = 16) and sul 2 (N = 7). This is concerning as tetracyclines and sulphonamides are not restricted to human use but are also used in livestock and poultry production (Damashek et al., ; Ma et al., ). The presence of both tetracyclines and sulphonamides in waterways may indicate the potential impact of agriculture on the local resistome of the aquatic environment, with the potential for specific ARGs to be used as markers and indicators of agricultural faecal pollution. The most common aminoglycoside resistance genes detected were aph (N = 8), aad (N = 7) and aac (N = 6). Aminoglycosides are first-line antibiotics in human medicine (Krause et al., ). The most frequent quinolone resistance genes studied were qnr S (N = 11) and mfd (N = 2). The most frequently detected MLSB ARGs were erm B (N = 6), erm F (N = 6), ere A (N = 4) and lnu B (N = 4). Quinolone and MLSB antibiotics are usually reserved as alternatives when first-line options are not effective (Pham et al. 2019; Pardo et al., ). Notable phenicol ARGs detected were cat (N = 4), cml (N = 3) and flo R (N = 3). The most frequent colistin resistance gene studied was mcr (N = 7). The common resistance genes studied for vancomycin were van A (N = 3), van B (N = 3) and van C (N = 3). Colistin antibiotics are a last resort in human medicine and, if ineffective, may result in a longer infection time and high mortality rates (Guo et al., ; Mull et al., ).

The extensive list of ARGs studied highlights the scientific relevance of addressing resistance to antibiotics; however, some classes, such as oxazolidinones and lipopeptides, are missing from this list, indicating a possible knowledge gap that needs to be addressed. The resistance gene studied for aminocoumarins was parY; for mupirocin, ile S1; for triclosan, Tri C; and for diaminopyrimidines, dfr variants (dfr, dfr E, dfr F and dfr G). The few papers studying resistance to these antimicrobials indicate a possible knowledge gap, particularly for antimicrobials such as triclosan and diaminopyrimidines, which are used in soap and haircare products (Alfhili & Lee, ; Garre et al., ; Vincenzi et al., ). A variety of multiple drug resistance (MDR) genes were analysed, with the most prevalent being qacEdelta 1 (N = 3). MDR occurs when bacteria are non-susceptible to at least three or more antibiotic classes (Ho et al., ). While E. coli is generally considered non-pathogenic, the majority of healthcare-associated infections are due to MDR pathotypes expressing extended spectrum beta-lactamases (ESBLs). These strains are commonly isolated from surface waters influenced by human activity, including drinking water sources and recreational water (Sidhu et al., ). Overall, the common occurrence of these ARGs across separate geographical locations highlights the dissemination of resistance to these clinically important antibiotics and the potential impact on mortality rates due to the rise in AMR-associated bacterial infections. There is currently no standard set of markers for the tracking of ARGs (Leao et al., ), indicating a gap in knowledge. However, the presence of these genes does not mean that they are being transcribed and are biologically active. Only a few papers carried out a phenotypic investigation through AST and MIC assays. This emphasises the need for routine monitoring and surveillance of ARGs in the aquatic environment employing complementary methodologies.

Mobile genetic elements

Mobile genetic elements (MGEs) are genetic material that can aid the capture and transmission of exogenous genes; they include integrons, plasmids, transposons and genomic islands (Sanderson et al., ; Xie, ). MGEs aid the dissemination of AMR between bacterial species by facilitating horizontal gene transfer (HGT) of ARGs (Fig. ). Detection of high concentrations of MGEs suggests the possibility of significant HGT in a particular environment (Li, Y. et al. ). Overall, the papers identified various mobile genetic elements such as plasmids (N = 6), integrons (N = 13) and transposons (N = 6). These relatively low numbers suggest that there is a knowledge gap around their true distribution. This may be due to the difficulty of detecting MGEs, as they contain many repetitive sequences, which may cause them to be misidentified as part of the non-coding regions. However, the increased availability of MGE sequences should facilitate the development of more focused markers (Xie, ).

Source tracking markers

Microbial source tracking (MST) assays are used to assess water quality and identify possible sources of faecal pollution by targeting specific marker genes (Lee, S. et al. ). A variety of genetic markers are available, the main types being those that target bacteria, such as the HF183 Bacteroides 16S rRNA genetic marker that can be used to detect human faecal pollution in water environments (Lee, S. et al. ); those that target viruses, such as crAssphage (Agramont et al., ), a bacteriophage highly specific to the human gut (Stachler & Bibby, ); and those that target specific DNA such as mitochondrial DNA (Table ). The link between a marker and its source has been validated in previous studies. Accurate tracking of faecal pollution can measure the influence that human and animal faecal pollution has on ARGs in the aquatic environment (Chen, Z. et al. ). From the 33 studies, a variety of MST markers (Fig. ) were used to identify human sources, with the most commonly used markers being crAssphage (N = 13) and HF183 (N = 13). For ruminant sources, the most common was Rum2bac (N = 2); for avian sources, the most frequently used was GFD (N = 3); and for pig sources, the most common was Pig-2-bac (N = 3). The use of the same markers in different geographical regions suggests that it may be possible in the future to have a single marker for each source to track faecal pollution worldwide, enabling easier comparison of results.
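Markers such as HF183 and crAssphage, and the ARGs discussed above, are most often quantified by qPCR against a standard curve. The sketch below shows the standard-curve back-calculation and the scaling from a reaction to a 100 mL water-sample equivalent; the slope, intercept, volumes and Cq value are hypothetical, and each of the reviewed studies used its own assay-specific calibration.

```python
def copies_per_reaction(cq: float, slope: float, intercept: float) -> float:
    """Back-calculate gene copies in a qPCR reaction from a standard curve
    of the form Cq = slope * log10(copies) + intercept."""
    return 10 ** ((cq - intercept) / slope)


def copies_per_100ml(cq: float, slope: float, intercept: float,
                     template_ul: float, eluate_ul: float,
                     filtered_ml: float) -> float:
    """Scale reaction-level copies up to copies per 100 mL of filtered water."""
    per_sample = copies_per_reaction(cq, slope, intercept) * (eluate_ul / template_ul)
    return per_sample * (100 / filtered_ml)


# Hypothetical assay: slope -3.4, intercept 38.5, 5 uL of a 100 uL DNA eluate
# tested from 200 mL of filtered river water, observed Cq of 30.2.
print(f"{copies_per_100ml(30.2, -3.4, 38.5, 5, 100, 200):.1e} copies/100 mL")
```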
Microbial community of faecal polluted water sources

Eight studies investigated the microbial community of samples and found that the community harboured multiple ARGs, with non-faecal indicator phyla such as Proteobacteria and Actinobacteria harbouring multi-drug resistance efflux pumps that are part of the resistance nodulation division (Bagi & Skogerbø, ). There is a correlation between the microbial community at the genus level and the profile of ARGs. Ma et al. ( ) found that the faecal indicator phylum Firmicutes harboured the largest number of ARGs, with one genus harbouring genes encoding resistance to MLSB (erm F, mef A, erm B, lnu B and erm G), aminoglycosides (aad A and aad E), tetracyclines (tet M, tet 36, tet O, tet W, tet 32, tet X2 and tet 44), chloramphenicols (flo R) and MDR (qac Edelta1). Similarly, Chen et al. ( ) found six MDR genes to be related to the indigenous microbial community. Hiruy et al. ( ) found that during the wet season, greater amounts of rainfall resulted in more runoff and soil erosion, which caused more soil-related genera, such as Vicinamibacter and Legionella, to be present in the aquatic environment, potentially exchanging ARGs with the indigenous microbiome. This suggests that some ARGs originate from non-faecal bacteria and that there may be an exchange of ARGs between faecal and non-faecal bacteria. Hou et al. ( ) found that in subtropical watersheds, MLSB and tetracycline resistance occurs in bacterial species commonly found in the gut (Bacteroides, Faecalibacterium, Clostridium, Blautia and Ruminococcus), suggesting that faecal pollution aids the dissemination of MLSB and tetracycline resistance. This highlights the potential role faecal pollution may have in influencing the ARG profile of watersheds, particularly urban waterways. In contrast, Ma et al. ( ) suggest that constant wastewater input contributes to the microbial community impacting surface waters but does not affect the ARG profile, as horizontal gene transfer is not retained due to changing environmental conditions. This change may be attributed to seasonal variation affecting pH, total organic carbon and heavy metal concentrations.

Other microorganisms

From the selected studies, there was little investigation into other faecal-associated microorganisms such as protozoa (N = 2), bacteriophages (N = 3) and viruses (N = 1). These organisms play a significant role in disease and can act as reservoirs and disseminators of AMR. Like FIB, protists can be indicators of water quality; however, there have been few broad investigations of protists in urban environments (Lee, S. et al. ; Allsing et al., ). Both papers reported the presence of protozoa, with Lee et al. ( , ) profiling the protist community and identifying the potential for human disease outbreaks, while Allsing et al. ( ) detected the presence of protozoa and viruses that cause human disease, suggesting that the presence of any live virus or microbe may influence disease through activities such as swimming. Only three studies investigated bacteriophages beyond the MST marker crAssphage. Nolan et al. ( ) and Sala-Comorera et al. ( ) found ARGs to be harboured by bacteriophages, while Sanderson et al. ( ) investigated HGT; this highlights the role that bacteriophages play as a reservoir and disseminator of AMR. Bacteriophages may also spread AMR from the environment to animals and humans through the ingestion of contaminated water and shellfish (Nolan et al., ). Overall, the low number of studies that investigated these microorganisms indicates a potential knowledge gap that should be addressed.

Correlation of mobile genetic elements with ARGs

The most common MGEs studied were integrons, notably class 1 integrons (int I1), which were the most frequently studied. Markers associated with HGT, such as integrons, are usually found in areas where there is prevalent anthropogenic pressure, particularly where wastewater is being deposited (Niestepski et al., ). For this reason, it has been proposed to use int I1 as an indicator of anthropogenic pollution (Nguyen et al., ). The integrons int I1, int I2 and int I3 are more common in terrestrial environments and less common in marine environments. Toubiana et al. ( ) found that int I2 and int I3 may be more specific, as they were detected during peak faecal contamination at times of peak beach attendance, while int I1 may indicate other urban pollution. However, using int I1 as a marker may be unsuitable because it has the potential to contain ARGs, allowing for self-selection and potentially leading to challenges in distinguishing ARG dissemination from faecal pollution (Zhang et al., ). int I1 has also been identified as being utilised for adaptation to other environmental stresses, such as heavy metals, in plankton-associated bacterial communities (Toubiana et al., ). Reynolds et al. ( ) highlight the complexity of integrons, which are often associated with other MGEs, resulting in further dissemination of ARGs that are commonly found in conjunction with int I1, such as the sulphonamide resistance gene sul 1. Zhang et al. ( ) support these findings, with sul 1 and sul 2 having the strongest correlation with the abundance of int I1. Metagenomic analysis by Chen et al. ( ) identified the MGEs with the most associated resistance genes, the most common being int I1 with the mupirocin resistance gene ile S1, while the transposase IS 91 contained sul 2. These findings highlight the role that MGEs, particularly int I1, have in shaping the resistome of the aquatic environment. The study also found that river sediment contributed significantly to the amount of MGEs, indicating that the dissemination of ARGs within the river was largely connected to horizontal transfer promoted by the MGEs. Hou et al. ( ) found that the transposase tnp A-07, fuelled by the input of faecal bacteria, is the keystone of HGT in an urban lagoon. Altogether, these studies highlight the significant role MGEs play in the persistence and dissemination of AMR within the aquatic environment introduced by anthropogenic pressures such as wastewater.

The use of microbial source tracking to identify sources of faecal pollution and ARGs

MST markers are a valuable tool for identifying the possible source of faecal pollution and the source of ARGs in the aquatic environment. Williams et al. ( ) found a significant correlation between the human faecal marker HF183 and the dfr A1, sul 1, qnr S and van B resistance genes, highlighting the link between raised ARG abundance and sewage input. The study by Chen et al. ( ) found that crAssphage abundance significantly correlated with the abundance of aminoglycoside (aad A, aac (6’)-Ib, aad A1, aad A2, aph A1, aad A5), MLSB (erm F), tetracycline (tet X), quinolone (aac (6’)-Ib), phenicol (cat B3, flo R), sulphonamide (sul 1, dfr A12), MDR (tol C) and beta-lactam (bla OXA10) resistance genes. The study by Zhang et al. ( ) found a correlation between the carbapenem resistance gene bla NDM-1 and human markers (BacHum and CPQ056) and pig markers (Pig-2-Bac, P.ND5), suggesting that its occurrence was due to combined human and pig pollution. A swine fever outbreak and the subsequent decrease in pig breeding resulted in a slight decrease in bla NDM-1 abundance. However, because human faecal pollution was prevalent throughout the watershed, this decrease was not significant. Similarly, Damashek et al. ( ) found that the human faecal marker HF183 strongly correlated with the carbapenem resistance genes bla CTX-m1, bla SHV and bla KPC in a multi-use watershed. The study found that the correlation between carbapenem resistance and ruminant and poultry markers was weaker compared to HF183; this suggests that while agricultural faecal pollution contributes to some of the ARGs, human faecal pollution remains the main source. These two studies highlight the usefulness of MST markers in identifying sources of faecal pollution and their contribution to the resistome of the aquatic environment. A variety of factors influence the efficacy of MST markers, and differences exist between markers. Human markers, such as the Bacteroides marker HF183 and the Bacteroides phage crAssphage, have different decay rates. crAssphage can be detected for more than 21 days, persisting longer than HF183, which can be detected for up to 10 days (Ballesté et al., ; Nolan et al., ; Sala-Comorera et al., ). Therefore, crAssphage can travel further along the watershed (Zhang et al., ). This can influence the interpretation of results, with prolonged persistence resulting in an overestimation of human faecal pollution and potentially giving false-positive results. Another important factor is the sensitivity and specificity of markers; many studies report a variety of results due to spatiotemporal differences arising between locations, countries and populations. This can result in one MST marker being highly specific and highly sensitive in one area but not specific or sensitive in another area; for example, in southern France HF183 shows 56% sensitivity, while in California, USA, it is associated with 61% sensitivity (Toubiana et al. 2020), and in China it has a 36% sensitivity (An et al., ). This can be attributed to differences in the gut microbiome, diet and sanitation infrastructure (Cao et al., ). This highlights the need for local investigations and ring trials to validate the use of markers. Some markers also show cross-reactivity with other species. Zhang et al. ( ) found that BacHum can cross-react with pigs and cattle. This cross-reaction resulted in BacHum correlating with the ruminant-associated ARG tet O, while the other human marker, CPQ056, did not. Cross-reactivity can result in false-positive and false-negative results; it is therefore important to consider this when selecting markers, and it highlights the importance of using multiple markers.
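The sensitivity and specificity figures quoted above come from validation panels of faecal samples of known origin; the sketch below shows the standard calculation behind such figures, using entirely hypothetical counts. Cross-reactivity of the kind described for BacHum appears in such panels as false positives, lowering the specificity term.

```python
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)


# Hypothetical validation panel for a human-associated marker:
# 50 human-source samples (28 detected) and 60 animal-source samples (3 detected).
sens, spec = sensitivity_specificity(tp=28, fn=22, tn=57, fp=3)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")  # sensitivity 56%, specificity 95%
```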
The contribution of wastewater treatment to AMR dissemination

WWTPs play a significant role in contributing to faecal pollution of surface waters, as illustrated by several of the studies (Agramont et al., ; Niestepski et al., ; Sanderson et al., ; Nguyen et al., ; Bagi and Skogerbø ; Damashek et al., ; Hiruy et al., ; Kneis et al., ; Chen, Z. et al. ; Li, Q. et al. ; Leao et al., ). WWTPs harbour antibiotic residues and heavy metals, creating an environment that selects for resistant bacteria. Chen et al. ( ) found that ARGs encoding resistance to tetracyclines, aminoglycosides and sulphonamides, such as tet T, tet W, tet X, str A, str B, sul 1 and sul 2 (Fig. ), were the most commonly found in WWTP samples. Agramont et al. ( ) found that aquatic lakes impacted by mining and wastewater effluent harboured an abundance of ARGs and int I1, indicating the impact of anthropogenic activity. This is consistent with the findings of Niestepski et al. ( ) on the impact of various wastewater effluents, ranging from treated to urban wastewater, on the aquatic environment. The study found a direct link between the contribution of wastewater to concentrations of E. coli, enterococci and Bacteroides fragilis and the abundance of ARGs. Damashek et al. ( ) extended the investigation by studying the entire watershed and found that non-point sources, such as aging septic tanks and sewage infrastructure, contribute significantly to the dissemination of AMR in aquatic environments. Consistent with this, Bagi and Skogerbø ( ) also found possible pathogenic strains of Arcobacter and Bacteroides, suggesting that continuous release of sewage into streams can result in a greater spread of sewage effluent towards beaches. Hence, the constant release of low-level wastewater may introduce potential pathogens into areas used for bathing. There are many conflicting findings between the studies. The findings of Li et al. ( ) suggest that wastewater from WWTPs contributes to the release of ESBL E. coli. Strains from WWTPs exhibit genetic similarity to those found in the aquatic environment of the study, suggesting that the resistance genes, along with the virulence factors and MGEs, can disseminate from this source into water used for recreation, drinking water or irrigation in agriculture. The findings of Niestepski et al. ( ) also suggest that WWTPs are a source of contamination of river water, introducing potentially pathogenic strains of B. fragilis, E. faecalis and E. coli which harbour ARGs, along with class 1 and class 2 integrases aiding the dissemination of AMR. Conflicting with these findings, a study by Kneis et al. ( ) found that ARGs encoding resistance to trimethoprim (dfr B) and beta-lactams (bla TEM and amp S) did not show a relationship with wastewater. Instead, these genes were most likely to be from the natural water microbiome independent of human influence, or could be the result of non-point anthropogenic activities, thereby emphasising the need for validated markers to unequivocally determine sources. These studies confirm that wastewater significantly contributes to faecal pollution of surface waters through the release of ARGs and potentially pathogenic microorganisms, emphasising the public health risk along with the ecological concern. This highlights the need for standardised and robust wastewater management practices.

The influence of the environment on faecal pollution

Environmental factors can influence faecal contamination and ARGs, with temperature, pH and UV light exposure affecting the local microbial community (Leao et al., ). The presence of heavy metals like zinc can exert pressure selecting for resistant bacteria in soil, which can be introduced into the aquatic environment via runoff; the presence of zinc and lead correlates with an increased abundance of genes encoding resistance to erythromycin (Agramont et al., ). To complicate this further, DNA from both humans and animals, along with ARG decay rates, can be influenced by flow, dilution, sediment absorption and exposures such as UV light (Chen, Z. et al. ). Li et al. ( ) found that the detection rate of ESBL E. coli decreased with higher levels of rainfall, suggesting a dilution effect of rain on the river system. This is consistent with Hiruy et al.’s ( ) findings of seasonal differences in bacterial communities in both wastewater influent and riverine bacterial communities, with the wet season causing a reduction in faecal streptococci and ESBL E. coli. Williams et al. ( ) found that 4 days of rainfall resulted in raised levels of human faecal markers. However, this may be because Bacteroides can persist longer than FIB, or because the genetic assays that quantify these markers detected nonviable bacteria which would not otherwise grow using traditional culture-based methods. Stormwater infrastructure can play a significant role in faecal pollution and the subsequent dissemination of ARGs. Often, stormwater infrastructure is built near sewage infrastructure and can receive inputs from sewage during dry weather due to leaks and blockages, as well as overflow during rainfall events, introducing untreated sewage into the aquatic environment (Williams et al., ). Rainfall can also introduce nutrients, silt and other pollutants that have built up during dry periods, and this can alter the composition of the local microbiota, with pathogens that were present in the silt and sediment being introduced (Lee, S. et al. ). This change in species composition can also alter the resistome. Differences in the antibiotics used can affect the local resistome; in Ireland, ciprofloxacin, sulphonamide, fluoroquinolone, tetracycline, trimethoprim and beta-lactam class antibiotics are among the most used in both human and veterinary medicine (Nolan et al., , ; Reynolds et al., ; Sala-Comorera et al., ). Beta-lactam antibiotics such as penicillins and cephalosporins are the most prescribed antibiotics in Irish hospitals, accounting for 50% of prescriptions (Reynolds et al., ); this resulted in bla TEM genes being the most abundant ARG found in all four papers, followed by bla SHV and bla CTX, which were also found in high abundance in two of the papers. All three ARGs also correlated with the human faecal markers HF183 (Nolan et al., ; Reynolds et al., ; Sala-Comorera et al., ) and crAssphage (Nolan et al., ). However, these particular ARGs may have been selected because beta-lactams are the most common antibiotics used in the healthcare sector within Ireland and are therefore clinically important.

Animals can contribute to faecal pollution through fouling along waterways. Wild animals such as birds and deer overlap habitats with livestock, where they may indirectly ingest AMR bacteria by grazing on faecally contaminated pasture. Many of these animals often use sites impacted by wastewater effluent for drinking water, thereby ingesting AMR bacteria which can colonise their guts. They disseminate AMR bacteria by migrating to another area and defecating; this can be very prevalent with birds due to their ability to cover large distances. Some pathogenic strains of bacteria are often associated with animals; deer have been found to harbour virulent strains of Salmonella enterica serovar Typhimurium (Lee, S. et al. ). Dogs and birds have been known to harbour clinically important ARGs and are therefore a potential vector for the dissemination of AMR bacteria within the environment, further increasing faecal contamination (Reynolds et al., ; Williams et al., ). The study by Williams et al. ( ) found that dog faeces and sewage on beaches increase the levels of Enterococcus by 95%. The persistence of ARGs within the aquatic environment at sites with minimal human impact suggests that wild animals such as deer and birds have a role in contaminating water sources (Nolan et al., ). Domestic animals can also contribute; dogs are often walked in popular recreation sites such as beaches, hiking trails and rivers, thus becoming a possible non-point source of faecal pollution (Reynolds et al., ). Tetracycline and sulphonamide resistance genes were among the most abundant ARGs studied by the papers and could be attributed to both non-point and point source faecal pollution. As stated, these antibiotics are not restricted to human usage but are employed in both livestock and poultry farming. Damashek et al. ( ) found that tetracycline resistance showed a strong correlation with areas within the watershed that had a greater agricultural influence. However, Zhang et al. ( ) showed that even though tet W and tet O are efflux pumps for tetracycline, the source and variation between locations were inconsistent. This may be because the occurrence and abundance of tet W were strongly associated with human faecal markers, while the occurrence and abundance of tet O were associated with pig, ruminant and poultry faecal sources; this suggests that different species may have different associated ARGs, which could be utilised as possible markers for agricultural and urban sources. Manure from these animals can be a source for these genes to enter the environment; antibiotic residues in livestock and poultry can be consumed via their meat (Ma et al., ), leading to exposure and selection for resistant bacteria within the intestinal tract. Thornton et al. ( ) found int I1 to be present within the entire watershed despite different land uses and levels of pollution; sulphonamide resistance is often associated with int I1, which may indicate that it is persistent within the natural environment due to other stresses.
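Associations between marker abundance and ARG abundance, such as those reported for tet W and tet O above, are typically assessed with rank correlation across sampling sites or dates. The sketch below runs such a test on entirely hypothetical paired measurements; it illustrates the general approach rather than any specific analysis from the reviewed studies.

```python
from scipy.stats import spearmanr

# Hypothetical paired measurements (log10 copies/100 mL) at eight sampling sites.
hf183 = [2.1, 3.4, 4.0, 2.8, 5.1, 3.9, 4.6, 2.4]   # human faecal marker
tet_w = [1.5, 2.9, 3.6, 2.2, 4.4, 3.7, 4.0, 1.8]   # tetracycline resistance gene

# Spearman rank correlation is robust to non-normal, censored qPCR data.
rho, p_value = spearmanr(hf183, tet_w)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```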
Antimicrobial resistant (AMR) strains of FIB were identified in all the studies. The isolation of AMR isolates was carried out using media that had been supplemented with the following antibiotics: aztreonam, cefotaxime, ceftazidime, ceftriaxone, ciprofloxacin, colistin, fluconazole and meropenem. Additional investigations included antibiotic susceptibility testing (AST) and minimum inhibitory concentration (MIC), with the resulting isolates of interest being subjected to genetic analysis. These antibiotics are commonly used, and their presence in the aquatic environment may suggest whether urban or agricultural activity is influencing the waterway. Aztreonam, cefotaxime, ceftazidime, ceftriaxone and ciprofloxacin are used as first-line antibiotics for human infection. While colistin and meropenem are last resort antibiotics; the authors may have also selected these antibiotics to highlight the increase in resistance to clinically important antibiotics. The most common beta-lactamase ARGs studied were bla CTX ( N = 12), bla TEM ( N = 12), bla OXA ( N = 11) and bla SHV ( N = 10). This is concerning as bla CTX , bla OXA and bla SHV confer resistance to carbapenems which are clinically significant as a last resort antibiotic (Reynolds et al., ). Another common ARG within this group was bla KPC ( N = 8) which encodes for a carbapenemase. These enzymes have gained much global attention as they have a broad spectrum, not only inactivating carbapenem antibiotics but other clinically important beta-lactam antibiotics in both Gram-negative and Gram-positive bacteria (Chen, Z. et al . ). The predominant ARG studied encoding tetracycline resistance was the tet gene with the most common subtypes being tet A ( N = 8), tet O ( N = 7), tet Q ( N = 5), tet W ( N = 5) and tet X ( N = 5). For sulphonamides, the most common resistance gene studied was sul with the most common subtypes being sul 1 ( N = 16) and sul 2 ( N = 7). This is concerning as tetracycline and sulphonamides are not just restricted to human use but also used in livestock and poultry production (Damashek et al., ; Ma et al., ). The presence of both tetracycline and sulphonamides in waterways may indicate the potential impact of agriculture on the local resistome of the aquatic environment, with the potential for specific ARGs to be used as markers and indicators for agricultural faecal pollution. The most common aminoglycoside resistance genes detected were aph ( N = 8), aad ( N = 7) and aac ( N = 6). Aminoglycosides are first-line antibiotics in human medicine (Krause et al., ). The most frequent quinolone resistance genes studied were qnr S ( N = 11) and mfd ( N = 2). The most frequent MLSB-detected ARGs were erm B ( N = 6), erm F ( N = 6), ere A ( N = 4) and inu B (N = 4). Quinolone and MLSB antibiotics are usually reserved as the alternative when first-line options are not effective (Pham et al . 2019; Pardo et al., ). Notable phenicol ARGs detected were cat ( N = 4), cml ( N = 3) and flo R ( N = 3). The most frequent colistin resistance gene studied was mcr ( N = 7). The common resistance genes studied for vancomycin were van A ( N = 3), van B ( N = 3) and van C ( N = 3). Colistin antibiotics are a last resort in human medicine and if ineffective may result in a longer infection time and high mortality rates (Guo et al., ; Mull et al., ). 
The extensive list of ARGs studied highlights the scientific relevance of addressing resistance to antibiotics; however, from this list, there are some missing classes such as oxazolidinones and lipopeptides indicating a possible knowledge gap that needs to be addressed. The resistance gene studied for aminocoumarin was parY ; mupirocin was ile S1; triclosan was Tri C and diaminopyrimidine variants of dfr , dfr E, dfr F and dfr G. The few papers studying resistance in these antimicrobials indicate a possible knowledge gap, particularly for antimicrobials such as triclosan and diaminopyrimidine which are used in soap and haircare products (Alfhili & Lee, ; Garre et al., ; Vincenzi et al., ). A variety of multiple drug resistance (MDR) genes were analysed with the most prevalent being qacEdelta 1 ( N = 3). MDR occurs when bacteria are non-susceptible to at least three or more antibiotic classes (Ho et al., ). While E . coli is generally considered non-pathogenic, the majority of healthcare-associated infections are due to MDR pathotypes expressing extended spectrum beta-lactamases (ESBLs). These strains are commonly isolated from surface waters influenced by human activity including drinking water sources and recreational water (Sidhu et al., ). Overall, the common occurrence of these ARGs across separate geographical locations highlights the dissemination of resistance to these clinically important antibiotics and the potential impact on mortality rates due to the rise in AMR-associated bacterial infections. There is currently no standard set of markers for the tracking of ARGs (Leao et al., ) indicating a gap in knowledge. However, just because these genes are present does not mean that they are being transcribed and biologically active. Only a few papers carried out a phenotypic investigation through AST and MIC assays. This emphasises the need for routine monitoring and surveillance of ARGs in the aquatic environment employing complementary methodologies.
Mobile gene elements (MGEs) are genetic material that can aid the capture and transmission of exogenous genes; they include integrons, plasmids, transposons and genetic islands (Sanderson et al., ; Xie, ). MGEs aid in the dissemination of AMR between bacteria species by facilitating horizontal gene transfer (HGT) of ARGs (Fig. ). Detection of high concentrations of MGEs suggests the possibility of significant HGT in a particular environment (Li, Y. et al . ). Overall, the papers identified various mobile genetic elements such as plasmids ( N = 6), integrons ( N = 13) and transposons ( N = 6). These relatively low numbers from the studies suggest that there is a knowledge gap around their true distribution. This may be due to the difficulty in detecting MGEs as they contain many repetitive sequences, which may cause them to be misidentified as part of the non-coding regions. However, the increased availability of MGE sequences should facilitate the development of more focused markers (Xie, ).
Microbial source tracking (MST) assays are used to assess water quality and identify possible sources of faecal pollution by targeting specific marker genes (Lee, S. et al . ). There are a variety of genetic markers available with the main types being those that target bacteria such as the HF183 Bacteroides 16S rRNA genetic marker that can be used to detect human faecal pollution in water environments (Lee, S. et al . ), those that target viruses such as crAssphage (Agramont et al., ) a bacteriophage highly specific to human gut (Stachler & Bibby, ) and those that target specific DNA such as mitochondrial DNA (Table ). The link between a marker and source has been validated in previous studies. Accurate tracking of faecal pollution can measure the influence that human and animal faecal pollution has on ARGs in the aquatic environment (Chen, Z. et al . ). From the 33 studies, a variety of MST markers (Fig. ) were used to identify human sources with the most commonly used marker being crAssphage ( N = 13) and HF183 ( N = 13). For ruminant sources, the most common is Rum2bac ( N = 2). For avian sources, the most frequently used is GFD ( N = 3), and for pig sources, the most common is Pig-2-bac ( N = 3). The use of the same markers in different geographical regions may suggest that it may be possible in the future to have a single marker for each source to track faecal pollution worldwide enabling easier comparison of results.
Eight studies investigated the microbial community of samples and found that the community harboured multiple ARGs, with non-faecal indicator phyla such as Proteobacteria and Actinobacteria harbouring multi-drug resistance efflux pumps that are part of the resistance nodulation division (Bagi & Skogerbø, ). There is a correlation between the microbial community at a genus level and the profile of ARGs. Ma et al. ( ) found that the faecal indicator phyla Firmicutes harboured the largest possible amount of ARGs, one genus harbouring genes encoding resistance to MLSB ( erm f, mef A, erm B, lun B and erm G), aminoglycosides ( aad A and aad E), tetracyclines ( tet M, tet 36, tet O, tet W, tet 32, tet O, tet X 2 and tet 44), chloramphenicols ( flo R) and MDR ( qac Edelta1). Similarly, Chen et al. ( ) found six MDR genes to be related to the indigenous microbial community. Hiruy et al. ( ) found that during the wet season, greater amounts of rainfall resulted in more runoff and soil erosion, which caused more soil-related genera, such as Vicinamibacter and Legionella , to be present in the aquatic environment potentially exchanging ARGs with the indigenous microbiome. This suggests that some ARGs originate from non-faecal bacteria and that there may be an exchange of ARGs between faecal and non-faecal bacteria. Hou et al. ( ) found that in subtropical watersheds, MLSB and tetracycline resistance occur in bacterial species commonly found in the gut: Bacteroides , Faecalibacterium , Clostridium , Blautia and Ruminococcus , suggesting that faecal pollution aids in the dissemination of MSLB and tetracycline resistance. This highlights the potential role faecal pollution may have in influencing the ARG profile of watersheds, particularly urban waterways. In contrast, Ma et al. ( ) suggest that constant wastewater input contributes to the microbial community, impacting surface waters. However, this input does not affect the ARG profile, as horizontal gene transfer is not retained due to a change in environmental conditions. This change may be attributed to seasonal changes affecting pH, total organic carbon and heavy metal concentrations.
From the selected studies, there was little investigation into other faecal-associated microorganisms such as protozoa ( N = 2), bacteriophage ( N = 3) and viruses ( N = 1). These organisms play a significant role in disease and can act as a reservoir and disseminate AMR. Like FIB, protists can be indicators of water quality; however, there have been few broad investigations of protists in urban environments (Lee, S. et al. ; Allsing et al., ). Both papers reported the presence of protozoa with Lee et al., ( , ) profiling the protist community identifying potential for human disease outbreaks, while Allsing et al. ( ) detected the presence of protozoa and viruses that cause human disease, suggesting that the presence of any live virus or microbe may influence disease through activities such as swimming. Only three studies investigated bacteriophages outside of the MST marker crAssphage. Nolan et al. ( ) and Sala-Comorera et al. ( ) found ARGs to be harboured by the bacteriophages. While Sanderson et al. ( ) investigated HGT, this highlights the impact that bacteriophages play as a reservoir and a disseminator of AMR. Bacteriophages may also spread AMR from the environment to animals and humans with ingestion of contaminated water and shellfish (Nolan et al., ). Overall, the low number of studies that investigated these microorganisms indicates a potential knowledge gap that should be addressed. Correlation of mobile genetic elements with args The most common MGEs studied were integrons notably class 1 integrons ( int I1) being the most frequently studied. Markers associated with HGT such as integrons are usually found in areas where there is prevalent anthropogenic pressure particularly where wastewater is being deposited (Niestepski et al., ). For this reason, it has been proposed to use int I1 as an indicator of anthropogenic pollution (Nguyen et al., ). The integrons int I1, int I2 and int I3 are more common in terrestrial environments and less common in marine environments. Toubiana et al. ( ) found that int I2 and int I3 may be more specific as they were detected during peak faecal contamination during peak beach attendance, while int I1 may indicate other urban pollution. However, using int I1 as a marker may be unsuitable because it has the potential to contain ARGs, allowing for self-selection and potentially leading to challenges in distinguishing ARG dissemination from faecal pollution (Zhang et al., ). int I1 has also been identified to be utilised for the adaptation of other environmental stresses such as heavy metal in plankton-associated bacterial communities (Toubiana et al., ). Reynolds et al. ( ) highlight the complexity of integrons which are often associated with other MGEs resulting in further dissemination of ARGs that are commonly found in conjunction with int I1 such as sulphonamide resistance gene sul 1. Zhang et al. ( ) support these findings with sul 1 and sul 2 having the strongest correlation with the abundance of int I1. Metagenomic analysis by Chen et al. ( ) found MGEs with the most associated resistance genes, the most common being int I1 and mupirocin resistance gene ile S1 and transposase IS 91 contained sul 2. These findings highlight the role that MGEs particularly int I1 have in shaping the resistome of the aquatic environment. The study also found that river sediment contributed significantly to the amount of MGEs indicating that the dissemination of ARGs within the river was largely connected to horizontal transfer promoted by the MGEs. Hou et al. 
( ) found that the transposase tnp A-07, fuelled by the input of faecal bacteria, is the keystone of HGT in an urban lagoon. Altogether, these studies highlight the significant role MGEs play in the persistence and dissemination of AMR introduced into the aquatic environment by anthropogenic pressures such as wastewater.
MST markers are a valuable tool for identifying the possible source of faecal pollution and the source of ARGs in the aquatic environment. Williams et al. ( ) found a significant correlation between the human faecal marker HF183 and the dfr A1, sul 1, qnr S and van B resistance genes, highlighting the link between raised ARG abundance and sewage input. The study by Chen et al. ( ) found that crAssphage abundance significantly correlated with the abundance of aminoglycoside ( aad A, aac (6’)-Ib, aad A1, aad A2, aph A1, aad A5), MLSB ( erm F), tetracycline ( tet X), quinolone ( aac (6’)-Ib), phenicol ( cat B3, flo R), sulphonamide ( sul 1, dfr A12), MDR ( tol C) and beta-lactam ( bla OXA10 ) resistance genes. The study by Zhang et al. ( ) found a correlation between the carbapenem resistance gene bla NDM-1 and human markers (BacHum and CPQ056) and pig markers (Pig-2-Bac, P.ND5), suggesting that its occurrence was due to combined human and pig pollution. A swine fever outbreak and the subsequent decrease in pig breeding resulted in a slight decrease in bla NDM-1 abundance. However, because human faecal pollution was prevalent throughout the watershed, this decrease was not significant. Similarly, Damashek et al. ( ) found that the human faecal marker HF183 strongly correlated with the carbapenem resistance genes bla CTX-m1 , bla SHV and bla KPC in a multi-use watershed. The study found that the correlation between carbapenem resistance and ruminant and poultry markers was weaker than that with HF183; this suggests that while agricultural faecal pollution contributes to some of the ARGs, human faecal pollution remains the main source. These two studies highlight the usefulness of MST markers in identifying sources of faecal pollution and their contribution to the resistome of the aquatic environment. A variety of factors influence the efficacy of MST markers, and differences exist between markers. Human markers, such as the Bacteroides marker HF183 and the Bacteroides phage crAssphage, have different decay rates. crAssphage can be detected for more than 21 days, persisting longer than HF183, which can be detected for up to 10 days (Ballesté et al., ; Nolan et al., ; Sala-Comorera et al., ). Therefore, crAssphage can travel further along the watershed (Zhang et al., ). This can influence the interpretation of results, with prolonged persistence resulting in an overestimation of human faecal pollution, potentially giving false-positive results. Another important factor is the sensitivity and specificity of markers; many studies report a variety of results due to spatiotemporal differences arising between locations, countries and populations. This can result in one MST marker being highly specific and highly sensitive in one area but not specific or sensitive in another, e.g. in southern France, HF183 shows 56% sensitivity, while in California, USA, it is associated with 61% sensitivity (Toubiana et al. 2020) and in China, it has 36% sensitivity (An et al., ). This can be attributed to differences in the gut microbiome, diet and sanitation infrastructure (Cao et al., ). This highlights the need for local investigations and ring trials to validate the use of markers. Some markers also show cross-reactivity with other species. Zhang et al. ( ) found that BacHum can cross-react with pigs and cattle. This cross-reaction resulted in BacHum correlating with the ruminant-associated ARG tet O, while the other human marker, CPQ056, did not.
Cross-reactivity can result in false-positive and false-negative results; it is therefore important to consider this when selecting markers, and it highlights the importance of using multiple markers.
WWTPs play a significant role in contributing to faecal pollution of surface waters, as illustrated by several of the studies (Agramont et al., ; Niestepski et al., ; Sanderson et al., ; Nguyen et al., ; Bagi and Skogerbø, ; Damashek et al., ; Hiruy et al., ; Kneis et al., ; Chen, Z. et al., ; Li, Q. et al., ; Leao et al., ). WWTPs harbour antibiotic residues and heavy metals, creating an environment that selects for resistant bacteria. Chen et al. ( ) found that ARGs encoding resistance to tetracyclines, aminoglycosides and sulphonamides, such as tet T, tet W, tet X, str A, str B, sul 1 and sul 2 (Fig. ), were the most commonly found in WWTP samples. Agramont et al. ( ) found that lakes impacted by mining and wastewater effluent harboured an abundance of ARGs and int I1, indicating the impact of anthropogenic activity. This is consistent with Niestepski et al.'s ( ) findings on the impact of various wastewater effluents, ranging from treated to urban wastewater, on the aquatic environment. The study found a direct link between the contribution of wastewater to concentrations of E. coli , enterococci and Bacteroides fragilis and the abundance of ARGs. Damashek et al. ( ) extended the investigation by studying the entire watershed and found that non-point sources, such as aging septic tanks and sewage infrastructure, contribute significantly to the dissemination of AMR in aquatic environments. Consistent with this, Bagi and Skogerbø ( ) also found possible pathogenic strains of Arcobacter and Bacteroides , suggesting that continuous release of sewage into streams can result in a greater spread of sewage effluent towards beaches. Hence, the constant release of low-level wastewater may introduce potential pathogens into areas used for bathing. There are many conflicting findings among the studies. The findings from Li et al. ( ) suggest that wastewater from WWTPs contributes to the release of ESBL E. coli . Strains from WWTPs exhibit genetic similarity to those found in the aquatic environment of the study, suggesting that the resistance genes, along with the virulence factors and MGEs, can disseminate from this source into water used for recreation, drinking water or irrigation in agriculture. Niestepski et al.'s ( ) findings also suggest that WWTPs are a source of contamination of river water, introducing potentially pathogenic strains of B. fragilis , E. faecalis and E. coli which harbour ARGs, along with class 1 and class 2 integrases aiding the dissemination of AMR. Conflicting with these findings, a study by Kneis et al. ( ) found that ARGs that encode resistance to trimethoprim ( dfr B) and beta-lactams ( bla TEM and amp S) did not show a relationship with wastewater. Instead, these genes were most likely to be from the natural water microbiome independent of human influence or could be the result of non-point anthropogenic activities, thereby emphasising the need for validated markers to unequivocally determine sources. These studies confirm that wastewater significantly contributes to faecal pollution of surface waters through the release of ARGs and potentially pathogenic microorganisms, emphasising the public health risk along with the ecological concern. This highlights the need for standardised and robust wastewater management practices.
Environmental factors can influence faecal contamination and ARGs, with temperature, pH and UV light exposure affecting the local microbial community (Leao et al., ). The presence of heavy metals such as zinc can exert selective pressure for resistant bacteria in soil, which can be introduced into the aquatic environment via runoff; the presence of zinc and lead correlates with an increased abundance of genes encoding resistance to erythromycin (Agramont et al., ). To complicate this further, DNA from both humans and animals, along with ARG decay rates, can be influenced by flow, dilution, sediment absorption and exposures such as UV light (Chen, Z. et al., ). Li et al. ( ) found that the detection rate of ESBL E. coli decreased with higher levels of rainfall, suggesting a dilution effect of rain on the river system. This is consistent with Hiruy et al.'s ( ) findings of seasonal differences in both wastewater influent and riverine bacterial communities, with the wet season causing a reduction in faecal streptococci and ESBL E. coli . Williams et al. ( ) found that 4 days of rainfall resulted in raised levels of human faecal markers. However, this may be because Bacteroides can persist longer than FIB, or because the genetic assays that quantify these markers detected nonviable bacteria that would not grow using traditional culture-based methods. Stormwater infrastructure can play a significant role in faecal pollution and the subsequent dissemination of ARGs. Often, stormwater infrastructure is built near sewage infrastructure and can receive inputs from sewage during dry weather due to leaks and blockages, as well as overflow during rainfall events, introducing untreated sewage into the aquatic environment (Williams et al., ). Rainfall can also introduce nutrients, silt and other pollutants that have built up during dry periods, and this can alter the composition of the local microbiota, with pathogens that were present in the silt and sediment being introduced (Lee, S. et al., ). This change in species composition can also alter the resistome. Differences in the antibiotics used can affect the local resistome; in Ireland, ciprofloxacin, sulphonamide, fluoroquinolone, tetracycline, trimethoprim and beta-lactam class antibiotics are some of the most used in both human and veterinary medicine (Nolan et al., , ; Reynolds et al., ; Sala-Comorera et al., ). Beta-lactam antibiotics such as penicillins and cephalosporins are the most prescribed antibiotics in Irish hospitals, accounting for 50% of prescriptions (Reynolds et al., ); this is reflected in bla TEM genes being the most abundant ARG found in all four papers, followed by bla SHV and bla CTX , which were also found in high abundance in two of the papers. All three ARGs also correlated with the human faecal markers HF183 (Nolan et al., ; Reynolds et al., ; Sala-Comorera et al., ) and crAssphage (Nolan et al., ). However, these particular ARGs may have been selected because beta-lactams are the most common antibiotics used in the healthcare sector within Ireland and are therefore clinically important. Animals can contribute to faecal pollution through fouling along waterways. Wild animals such as birds and deer overlap habitats with livestock, where they may indirectly ingest AMR bacteria by grazing on faecal-contaminated pasture. Many of these animals use wastewater effluent-impacted sites for drinking water, thereby ingesting AMR bacteria, which can colonise their guts.
They disseminate AMR bacteria by migrating to another area and defecating; this can be particularly prevalent with birds due to their ability to cover large distances. Some pathogenic strains of bacteria are often associated with animals; deer have been found to harbour virulent strains of Salmonella enterica serovar Typhimurium (Lee, S. et al., ). Dogs and birds have been known to harbour clinically important ARGs and are therefore potential vectors for the dissemination of AMR bacteria within the environment, further increasing faecal contamination (Reynolds et al., ; Williams et al., ). The study by Williams et al. ( ) found that dog faeces and sewage on beaches increase the levels of Enterococcus by 95%. The persistence of ARGs within the aquatic environment at sites with minimal human impact suggests that wild animals such as deer and birds have a role in contaminating water sources (Nolan et al., ). Domestic animals can also contribute; dogs are often walked in popular recreation sites such as beaches, hiking trails and rivers, thus becoming a possible non-point source of faecal pollution (Reynolds et al., ). Tetracycline and sulphonamide resistance genes were some of the most abundant ARGs studied by the papers and could be attributed to both non-point and point source faecal pollution. As stated, these antibiotics are not restricted to human usage but are also employed in both livestock and poultry farming. Damashek et al. ( ) found that tetracycline resistance showed a strong correlation with areas within the watershed that had a greater agricultural influence. However, Zhang et al. ( ) showed that even though tet W and tet O are efflux pumps for tetracycline, the source and variation of locations were inconsistent. This may be because the occurrence and abundance of tet W were strongly associated with human faecal markers, while those of tet O were associated with pig, ruminant and poultry faecal sources, suggesting that different species may have different associated ARGs, which could be utilised as possible markers for agricultural and urban sources. Manure from these animals can be a source for these genes to enter the environment; antibiotic residues in livestock and poultry can be consumed from their meat (Ma et al., ), leading to exposure and selection for resistant bacteria within the intestinal tract. Thornton et al. ( ) found int I1 to be present throughout the entire watershed despite different land uses and levels of pollution; sulphonamide resistance is often associated with int I1, which may indicate that it persists within the natural environment due to other stresses.
These studies highlight the complexity of ARG dissemination in the aquatic environment, driven by faecal pollution from both urban and rural sources with human and animal inputs. Faecal pollution introduces different microorganisms, particularly bacteria, which facilitate the dissemination of ARGs and MGEs into the environment across different geographical locations, emphasising the need for routine monitoring and surveillance of these genetic components within the aquatic environment. A major difficulty arises from investigators using different measurements, highlighting the need for standardised methods to ensure comparability of interpretation and, therefore, to recognise emerging transnational issues. A mix of genetic and culture-based methods will aid in determining which ARGs are being transcribed. MST has proven to be a valuable tool in the determination of faecal pollution sources, but its efficacy depends on accounting for spatiotemporal dynamics, indicating that local investigations should identify the best marker for use. There are also currently no standards set for the tracking of ARGs, which needs to be addressed. Robust wastewater management practices and future research into AMR dissemination and surveillance should address human health, animal health and environmental concerns, with a focus on a One Health approach to encompass the multitude of factors affecting faecal pollution. A potential limitation of this study was the period (January 2020 to November 2023) of the literature identified for the scoping review. Some research laboratories were closed and/or working at minimal levels during the pandemic period, or their work was refocused on the national need for SARS-CoV-2 testing. This may have restricted the collection of data during that time.
Global, regional and national burden and quality of care index (QCI) of leukaemia and brain and central nervous system tumours in children and adolescents aged 0–19 years: a systematic analysis of the Global Burden of Disease Study 1990–2019
Data resources
Data on the disease burden of leukaemia and CNS tumours in children and adolescents aged 0–19 years were obtained from the Global Health Data Exchange (GHDx, https://ghdx.healthdata.org ) and the Global Burden of Diseases, Injuries, and Risk Factors Study (GBD) 2019. The GBD dataset was established through a systematic approach to global, national and other categories of countries and regions to describe epidemiological data on various diseases, risk factors and injuries stratified by sex, age and geographical categories from 2002. The GBD 2019 incorporated nationally representative surveys, censuses and meta-analysis results to estimate the incidence, prevalence, mortality, years of life lost (YLLs), years lived with disability (YLDs) and disability-adjusted life years (DALYs) for 369 diseases and injuries in 204 countries and territories. In this study, six indicators (incidence, prevalence, mortality, DALYs, YLLs and YLDs) of leukaemia and CNS tumours were collected from the GHDx dataset. Leukaemia was defined as C91–C91.0, C91.2–C91.3, C91.6, C92–C92.6, C93–C93.1, C93.3, C93.8, C94–C95.9 according to the 10th revision of the International Classification of Diseases system, and CNS tumours were defined as malignant neoplasms of the meninges and brain, C70–C72.9. Age-standardised measurements were reported per 100 000 persons.
Quality of care index
The QCI was constructed for leukaemia and CNS tumours to represent the quality of childhood cancer care using the typical methodology applied in previous studies. The construction of the QCI followed a two-step procedure. First, four secondary indicators were calculated from the six primary parameters: (1) the YLLs to YLDs ratio, (2) the DALYs to prevalence ratio, (3) the mortality to incidence ratio and (4) the prevalence to incidence ratio:
Ratio of YLLs to YLDs = YLLs / YLDs
Ratio of DALYs to prevalence = DALYs / Prevalence
Mortality-to-incidence ratio = Mortality / Incidence
Prevalence-to-incidence ratio = Prevalence / Incidence
Second, principal component analysis (PCA) was performed on the four secondary indicators to extract the first principal component, which explained the majority of the variance across regions. This component was then scaled to a 0–100 range to produce the final QCI score, where higher values indicated better quality of care. The rationale for PCA was its ability to reduce dimensionality and combine correlated indicators into a single, interpretable metric. The detailed calculation procedure has been described in previous studies.
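To make the two-step construction concrete, the following is a minimal sketch rather than the code used in this study: it assumes a pandas DataFrame with hypothetical columns named ylls, ylds, dalys, prevalence, mortality and incidence (one row per country-year), and the sign convention and 0–100 rescaling are illustrative assumptions rather than the exact GBD procedure.

```python
# Minimal, illustrative sketch of the two-step QCI construction.
# Assumes a DataFrame `df` with columns: ylls, ylds, dalys, prevalence,
# mortality, incidence (column names are assumptions, not from the study).
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def compute_qci(df: pd.DataFrame) -> pd.Series:
    # Step 1: four secondary indicators from the six primary parameters
    ind = pd.DataFrame({
        "ylls_to_ylds":            df["ylls"] / df["ylds"],
        "dalys_to_prevalence":     df["dalys"] / df["prevalence"],
        "mortality_to_incidence":  df["mortality"] / df["incidence"],
        "prevalence_to_incidence": df["prevalence"] / df["incidence"],
    })

    # Step 2: first principal component of the standardised indicators
    scores = PCA(n_components=1).fit_transform(StandardScaler().fit_transform(ind))
    pc1 = pd.Series(scores[:, 0], index=df.index)

    # Assumed orientation step: flip the sign if the component loads
    # positively on mortality-to-incidence, so that higher QCI = better care.
    if pc1.corr(ind["mortality_to_incidence"]) > 0:
        pc1 = -pc1

    # Rescale the component to a 0-100 range
    return 100 * (pc1 - pc1.min()) / (pc1.max() - pc1.min())
```

The sign-orientation step is a design choice made here only so that the illustrative score increases with better outcomes; the published QCI methodology should be consulted for the exact transformation.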
Statistical analysis
Estimated annual percentage change (EAPC) was calculated to quantify the temporal trend of the QCI by applying a generalised linear model based on a Gaussian distribution. For this analysis, a generalised linear regression model was applied to the natural logarithm of age-standardised ratios (ASRs) for QCI, YLLs, YLDs and other relevant metrics. The regression model took the form ln(ASR) = α + β·Year + ε, where α represents the intercept, β represents the positive or negative ASR trend and ε is the error term. The EAPC was derived using the formula 100×(exp[β]−1), and its 95% CIs were obtained directly from the regression coefficients. A positive EAPC value (EAPC>0) indicated an increasing trend, whereas a negative EAPC (EAPC<0) signified a decreasing trend over the study period. Correlations between QCI and the sociodemographic index (SDI) were assessed using Pearson correlation coefficients to examine whether higher levels of socioeconomic development were associated with better quality of care. The SDI is a composite indicator of a country's lag-distributed income per capita, average years of schooling and the fertility rate in females under the age of 25 years. The SDI indicator was also extracted from the GBD 2019 dataset ( https://ghdx.healthdata.org/record/ihme-data/gbd-2019-socio-demographic-index-sdi1950-2019 ). In 2019, countries and regions were divided into five levels: high (0.81–1.00), high-middle (0.70–0.81), middle (0.61–0.69), low-middle (0.46–0.60) and low (0.00–0.45). Additionally, disparities in QCI across sex and age groups were investigated. For gender disparity, the gender disparity ratio (GDR) was calculated as the QCI in girls divided by the QCI in boys (GDR = QCI of girls / QCI of boys), where GDR>1 represented a better QCI level in girls than in boys. The QCI score was also analysed across five age groups (<1 year, 1–4 years, 5–9 years, 10–14 years and 15–19 years) to identify age-related trends. Uncertainty intervals (UIs) for all GBD-derived estimates were calculated using standardised GBD methods, which incorporate sampling variability, non-sampling error and model uncertainty. These intervals were used to determine statistical significance, defined as non-overlapping UIs between comparison groups. All statistical analyses in this study were conducted using Stata MP (V.18.0; Stata Corp LLC). All tests were two-sided and p values<0.05 were considered statistically significant.
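As a companion sketch (again illustrative; the analyses in the study were run in Stata, and the function and variable names below are assumptions), the EAPC can be reproduced by regressing ln(ASR) on calendar year and back-transforming the slope, and the GDR is a simple ratio of the two QCI values.

```python
# Illustrative EAPC and GDR calculations; not the study's Stata code.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def eapc(years: pd.Series, asr: pd.Series) -> dict:
    """EAPC from ln(ASR) = alpha + beta * year + epsilon."""
    X = sm.add_constant(years.astype(float))      # intercept + year
    fit = sm.OLS(np.log(asr), X).fit()
    beta = fit.params.iloc[1]                     # slope on year
    lo, hi = fit.conf_int().iloc[1]               # 95% CI for the slope
    to_pct = lambda b: 100 * (np.exp(b) - 1)      # back-transform to % change
    return {"eapc": to_pct(beta), "ci_low": to_pct(lo), "ci_high": to_pct(hi)}

def gender_disparity_ratio(qci_girls: float, qci_boys: float) -> float:
    """GDR > 1 means the QCI is higher in girls than in boys."""
    return qci_girls / qci_boys
```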
Disease burden of childhood cancer
In 2019, childhood neoplasms caused 10 752 050.79 DALYs (95% UI: 12 163 650.21 to 9 524 498.86), with a total of 132 194 deaths. Leukaemia and brain and CNS tumours are the two most prevalent malignant cancers in children. As shown in , deaths due to leukaemia decreased from 82 302 to 43 193, with the age-standardised rate decreasing from 3.62 (95% UI: 4.74 to 2.74) to 1.67 (95% UI: 1.91 to 1.45) per 100 000 person-years. Similarly, deaths due to brain and CNS tumours decreased from 29 735 to 23 538, with the age-standardised rate decreasing from 1.31 (95% UI: 2.09 to 0.87) to 0.91 (95% UI: 1.07 to 0.70) per 100 000 person-years ( ). The proportion of deaths from CNS tumours out of total childhood cancer deaths increased from 15.73% in 1990 to 17.81% in 2019, while the proportion of leukaemia deaths fluctuated, rising from 43.53% in 1990 to 46.34% in 2003 before decreasing thereafter.
Disparity of QCI across geographic regions
In 2019, the estimated QCI for leukaemia was 74.71 and the QCI of brain and CNS tumours was 56.59. From 1990 to 2019, the QCI of brain and CNS tumours displayed an increasing temporal trend with an EAPC of 1.45 (95% CI: 1.41 to 1.50), while Central Sub-Saharan Africa showed the smallest increase, with an EAPC of 0.12 (95% CI: 0.06 to 0.19) over the same period ( ). The QCI of leukaemia and brain and CNS tumours showed disparities across geographic regions and countries. In 2019, Western Europe had the highest QCI for leukaemia (94.50) and South Asia had the lowest (57.64); for brain and CNS tumours, high-income Asia Pacific and Central Sub-Saharan Africa had the highest and lowest QCI, respectively. From 1990 to 2019, Eastern Europe showed the greatest increasing trend of QCI for leukaemia, with an EAPC of 0.79 (95% CI: 0.59 to 0.98), while Central Latin America showed the largest decreasing trend, with an EAPC of −0.41 (95% CI: −0.45 to –0.38). For CNS tumours, East Asia showed the greatest increasing trend of QCI, with an EAPC of 2.93 (95% CI: 2.78 to 3.08), while Oceania showed the largest decreasing trend, with an EAPC of −0.03 (95% CI: −0.11 to 0.05), from 1990 to 2019 ( ). The distribution of the two childhood cancers' QCI among different countries in 2019 is shown in . At the country level, San Marino had the highest QCI for leukaemia (97.07) and Ghana had the lowest (48.71); for CNS tumours, Denmark and the Central African Republic had the highest and lowest QCI, respectively. From 1990 to 2019, Hungary showed the greatest increasing trend of the QCI (EAPC=1.12, 95% CI: 0.99 to 1.24) and Kyrgyzstan showed the largest decreasing trend (EAPC=−0.77, 95% CI: −0.84 to –0.71) for leukaemia; China showed the greatest increasing trend of the QCI (EAPC=3.02, 95% CI: 2.87 to 3.18) and Zimbabwe showed the largest decreasing trend (EAPC=−0.99, 95% CI: −1.20 to –0.78) for brain and CNS tumours ( and ).
Disparity of QCI across social development levels
The QCI was closely correlated with the sociodemographic level for both leukaemia and brain and CNS tumours. shows the trend of the QCI in different SDI regions from 1990 to 2019. In high, high-middle and middle SDI countries, the QCI for leukaemia was above the global average, while in low-middle and low SDI countries, it was below the global average. For brain and CNS tumours, the disparity between SDI levels was pronounced, with the QCI in low SDI countries being only 35.74% of that in high SDI countries.
As shown in , the country-level QCI of leukaemia was correlated with the country-level SDI (r=0.591, p<0.001), and a similar trend was observed for CNS tumours (r=0.812, p<0.001). From 1990 to 2019, countries at the high, high-middle and low SDI levels experienced an increasing trend in the QCI for leukaemia, with EAPCs of 0.32 (95% CI: 0.29 to 0.35), 0.27 (95% CI: 0.24 to 0.30) and 0.16 (95% CI: 0.13 to 0.18), respectively. However, there was a continuous downward trend in the middle SDI and low-middle SDI regions, with EAPCs of −0.13 (95% CI: −0.16 to −0.09) and −0.19 (95% CI: −0.20 to −0.18), respectively. This suggests that the gap in the quality of leukaemia care between the middle and low-middle SDI regions and other regions may widen in the future, warranting further attention to the middle SDI and low-middle SDI regions. From 1990 to 2019, the QCI of brain and CNS tumours showed an overall increasing trend ( ).
Gender and age trend of QCI
The QCI of leukaemia and CNS tumours across genders was explored through the calculated GDR. Overall, the QCI of boys was lower than that of girls for both leukaemia and CNS tumours. The gender difference for brain and CNS tumours fluctuated, and the GDR increased from 1.147 in 1990 to 1.160 in 2019, suggesting that the gender difference is gradually widening. From 1990 to 2019, the QCI for leukaemia showed a significantly decreasing trend, with EAPCs of −0.03 (95% CI: −0.05 to −0.01) and −0.03 (95% CI: −0.06 to −0.01) for boys and girls, respectively ( and ). The QCI of leukaemia showed a decreasing trend with age, with the lowest QCI of 64.15 in the 15–19 years age group. The QCI of CNS tumours fluctuated with age: lower QCIs were observed in the <1 year and 5–9 years age groups, while a higher QCI was observed in the 15–19 years age group ( and ).
Childhood cancer is a global public health issue, and the WHO has proposed a Global Initiative with the goal of increasing the survival rate of children with cancer globally to at least 60% by 2030 while reducing their suffering and improving their quality of life. In this study, we established a QCI to represent the quality of childhood cancer care, and the results suggested that the overall QCI was associated with sociodemographic levels, while the QCI of leukaemia in the middle SDI and low-middle SDI regions showed a decreasing temporal trend, and the gender disparity of QCI for CNS tumours increased over 30 years. In this study, we focused on two types of childhood cancer, leukaemia and CNS tumours, for which the quality of cancer care is of particular importance. Leukaemia had the highest burden of all childhood cancers, causing 43 193 deaths and 3 544 099.33 DALYs in 2019 and accounting for 32.67% and 32.96%, respectively, of the total childhood cancer burden in those aged 0–19 years. Leukaemia may appear at all ages, but different subtypes of leukaemia have different prevalence rates at different ages. Acute lymphoblastic leukaemia (ALL) is most common in early childhood and is more prevalent in males than in females. Acute myeloid leukaemia (AML) is highly prevalent in the elderly population, whereas chronic myeloid leukaemia (CML) and chronic lymphoid leukaemia (CLL) are rare in young children. Many genetic factors have been shown to be associated with an increased risk of ALL, including Down syndrome, germline mutations in PAX5 and ETV6 and polymorphic variants in specific genes. Prospective cohort studies based on older adults have found that an increase in leukaemia may also be associated with complications of haematologic malignancies and increased exposure to radiotherapy and chemotherapy, but the extent of their impact on the development of leukaemia in children is unclear. Five-year relative survival rates for ALL and AML are significantly higher in adolescents than in older patients, reflecting inequalities in access to care between children and older patients, differences in treatment regimens and more aggressive disease in older leukaemia patients. Given the unclear carcinogenesis of leukaemia, early detection of specific symptoms, such as hepatomegaly, splenomegaly, bruising, fever, limb and bone pain, pallor, fatigue and anorexia, followed by appropriate treatment and high-quality care, is the primary strategy for leukaemia prevention and control. CNS tumours are the second most common childhood malignancy and the most common solid tumour in children, and are the most common cause of death among all childhood cancers. Given the unclear process of CNS tumour development, enhancing the quality of childhood cancer care is the key to improving childhood cancer prognosis as well as to reducing the disease burden, and regimens for management have been emphasised as key pillars in the WHO global initiative. A recent systematic review by Uwishema et al highlights the significant burden of CNS tumours in Africa, where limited access to diagnostic and treatment facilities exacerbates poor outcomes. While their study focuses on the African context, it underscores the global disparities in CNS tumour care and the urgent need for targeted interventions in low-resource settings to improve outcomes for children and adolescents worldwide.
In this study, we observed a correlation between the quality of leukaemia and CNS tumour care and social development levels. This observation was similarly reported in the global estimation that high-income regions had higher QCIs than the global average values, which could be explained by these countries' capacity to deliver qualified childhood cancer services. The NCD Country Capacity Survey conducted by the WHO showed that over 90% of HICs had the ability to deliver fundamental cancer diagnosis and treatment services including pathology services (laboratories), cancer surgery, chemotherapy and radiotherapy. However, 55% of LMICs reported that none of these services were available. This great variation in service capacity could explain the disparity in childhood cancer QCI across countries at different social development levels. Second, financial expenses were also a barrier to a high QCI, especially in LMICs. Lack of universal health coverage leads to significant inequalities in access to and quality of information on cancer in LMICs, and the capacity for early diagnosis and management of paediatric cancer cases remains limited and often lacks effective investment in childhood cancer patients. In contrast, universal health coverage efforts in HICs have resulted in greater access to early diagnosis and treatment and quality services for more childhood cancer patients. Meanwhile, factors from the parents' perspective would also be a barrier to timely, high-quality childhood cancer care. The most common reasons for treatment abandonment include poverty, a lack of interest in their own disease, cultural myths, feelings of guilt and/or social discrimination among their peers. Some studies have shown that the incidence of and deaths from leukaemia have increased globally over the past three decades, with higher incidence and lower mortality rates in regions with higher economic levels, reflecting years of relentless efforts at the prevention, early detection, diagnosis and treatment of haematologic malignancies. This study observed an increasing QCI since 1990, while the QCI of leukaemia in the low SDI and low-middle SDI levels significantly decreased. The decreasing QCI trend may be related to the sharply increased disease burden of leukaemia in low SDI and low-middle SDI regions and a disproportionately limited increase in the ability to deliver corresponding leukaemia healthcare services in these countries. GBD reported that 37.43% of the incident cases were emerging in low and low-middle SDI countries in 2019, which was 1.31 times the value of 28.53% in 1990. However, the improvement of national ability for leukaemia early detection, treatment and long-term care was limited. For example, in Asia, 5-year survival estimates for LMICs range from 34.3% to 73.1%, compared with 77.1% to 85.0% in HICs. Early deaths due to infection, haemorrhage and abandonment of treatment are more frequent, with up to 50%–60% of children abandoning treatment in some areas. Meanwhile, children with leukaemia need long-term care, during which a family-based approach is also valuable. However, the related social intervention and support were far from satisfactory in low SDI and low-middle SDI countries. Lutz Goldbeck reported that promoting communication between parents about their coping strategies and about the reactions of their child could improve the goodness of fit of the family's joint efforts in coping with childhood cancer.
However, delayed diagnosis, early deaths, abandonment of treatment and increased relapse rates are major challenges for families of children with leukaemia in low-income countries, and disparities in the capacity of health services contribute to the lack of timely access to effective health resources for local children. Thus, the mismatch between the fast-increasing leukaemia burden and limited healthcare service capacity calls for more attention to be paid to low SDI and low-middle SDI regions with a high incidence of leukaemia, especially to the development of resilient and sustainable health systems that can deliver timely, affordable and high-quality childhood cancer care in response to the emerging childhood cancer burden. It also requires us to raise health awareness, increase investment in healthcare and strengthen global partnerships to address imbalances in socioeconomic development and reduce the burden of disease in LMICs. The QCI of childhood cancer differed between the sexes. Two observations were identified in this study. First, the QCI for childhood cancer was higher in girls than in boys. This may be related to the higher burden and worse prognosis of childhood cancer in boys than in girls. Sex genotype plays a significant role in the gender disparity in childhood cancer care. A study by Soon et al showed that all tumours had a higher incidence in boys, regardless of tumour subtype, patient age or region. It has also been confirmed that the incidence and mortality rates of different subtypes of leukaemia in boys tend to be higher than those in girls in different SDI regions. The incidence of different subtypes of leukaemia is similar in all countries, with the largest gender differences in AML and CLL, and smaller gender differences in ALL, which is generally male-dominated. We also acknowledge that this observation could be biased if a higher proportion of girls remained undiagnosed with childhood cancer because of the preferential attention given to boys in certain cultural contexts. To address this issue, it is crucial for policymakers to consider gender as a factor in the design of childhood cancer care programmes, particularly in LMICs, where disparities in care access are often exacerbated by socio-economic and cultural factors. Promoting gender-sensitive healthcare interventions and ensuring equal access to diagnosis, treatment and supportive care for both sexes should be prioritised in national cancer strategies. Additionally, further research is needed to explore the underlying causes of these disparities and develop targeted interventions. Second, the gender disparity of QCI for brain and CNS tumours showed a widening temporal trend. The distinct subtype distribution in girls and boys could explain the increasing gender disparity in QCI. There are more than 100 different histological subtypes of CNS tumours, and the incidence varies according to age and histological subtype. The WHO's CNS tumours classification released in 2021 uses extensive data from molecular testing, confirming higher mortality rates for some subtypes. Some studies have confirmed the high incidence of malignant CNS tumours in boys and the high incidence of non-malignant tumours in girls, both of which are on an upward trend, and brain and CNS tumours are the most common causes of cancer death in boys.
Although the quality of care for girls with CNS tumours is higher than that of boys globally, the GDR value of <1 in the low SDI region suggests that boys receive a higher quality of care than girls in this region, whereas the GDR value close to 1 in the high SDI region suggests that the quality of care received by boys and girls is almost equal, which is in line with the findings of our study ( ). However, potential confounders, such as differences in healthcare access, socioeconomic status and regional healthcare policies, may influence these results and should be considered when interpreting the gender disparities in QCI. Other possible explanations should be further explored. Nevertheless, the alarmingly low QCI for brain and CNS tumours in boys warrants more attention and deserves further research. According to previous studies, there are large differences in the quality of care between adult and childhood patients with cancer. Globally, the QCI of paediatric leukaemia patients was lower than that of adult leukaemia patients, whereas the quality of care of paediatric leukaemia patients in low SDI and low-middle SDI regions was somewhat higher than that of adult leukaemia patients. In addition, a previous study found that patients with CNS tumours in early adulthood have a higher QCI. Paediatric patients had better quality of care in the high SDI and high-middle SDI regions, while the QCI for CNS tumours was poor for all ages in the low and low-middle SDI regions, which is consistent with our study. These differences may be due to the different SDI levels and the subtypes of tumours. The results of this study should draw public attention to childhood cancer patients and the quality of their care, especially the unmet needs in LMICs and the poorer regions of HICs, with a view to formulating better policies and regulations to enable these children to receive better care. This study has several strengths and advantages. We systematically estimated the quality of care for two common types of childhood cancer at global, regional and national levels. The results of this study could provide essential data on vulnerable populations to better implement the CureAll approach to accomplish the global initiative by increasing access, advancing quality and saving lives. The study also has some limitations. First, our results should be interpreted with caution in some areas due to the limitations of the IHME-GBD dataset arising from national data registries. These limitations include potential reporting biases, incomplete data and varying standards of data collection across different countries. Second, we did not assess variables such as disease subtypes and ethnicity separately due to data unavailability. Additionally, the completeness of childhood cancer registration may vary across countries, particularly those with differing socioeconomic status. In countries with well-established cancer registries, data are generally more complete, while in low-income or less-developed countries, challenges such as underreporting and limited resources may affect data accuracy and availability. Furthermore, the cross-sectional nature of this study prevents us from drawing causal conclusions about the relationship between quality of care and the various factors considered. These discrepancies and methodological constraints should be considered when interpreting the findings from the GBD dataset.
In summary, we estimated the quality of care for children with leukaemia and CNS tumours. Overall, QCI showed an improving temporal trend, and QCI was positively associated with country-level social development levels, while the QCI of leukaemia in the middle and low-middle SDI regions showed a decreasing trend. Boys had a lower QCI level than girls, and the sex disparity was increasing in CNS tumours. This estimation highlighted the vulnerable regions and populations in accessing high-quality childhood cancer care. Countries with low social development levels and boys should be prioritised in policy interventions to reduce health disparities. To address these issues, policymakers in LMICs should focus on improving access to quality care, particularly for boys and populations in lower SDI regions. Ensuring equitable healthcare policies, increasing access to early diagnosis and enhancing treatment options will be crucial in addressing these disparities. Efforts to implement gender-sensitive approaches and targeted interventions can help bridge the care gaps and ultimately improve survival rates.
Dietary phytochemical indole-3-carbinol regulates metabolic reprogramming in mouse prostate tissue
In this study, we determined the metabolic alterations induced by the I3C diet in a prostate-specific Pten KO mouse model.
Chemicals and animal diet I3C was purchased and blended into AIN-93 M rodent diet (Research Diet, Inc., New Brunswick, NJ, USA) at a final concentration of 1% (w/w). The diet was stored at 4°C throughout the experimental period. All mice were caged under standard conditions with a 12-h light/12-h dark cycle. Water and diet were provided ad libitum in accordance with the protocol approved by the Institutional Animal Care and Use Committee (IACUC) at Rutgers University. Animal model Pb-Cre4 mice (strain: B6.Cg-Tg(Pbsn-cre)4Prb/Nci) and Pten(flox/flox) mice (C;129S4-Ptentm1Hwu/J) were obtained from the National Cancer Institute, USA, and Jackson Laboratories, respectively. Prostate-specific Pten knockout (KO) male offspring (Pb-cre/Pten(flox/flox)) were generated in the F2 generation by crossing female Pten(flox/flox) mice with male Pb-Cre4 mice. For simplicity, Pb-cre/Pten(flox/flox) mice are referred to as Pten KO, and Pten(flox/flox) mice are referred to as Pten wild-type (WT) (Fig. ). The mice were genotyped by polymerase chain reaction (PCR), and only male mice that were Cre carriers and homozygous Pten flox/flox were used for the treatment groups. The expected band sizes were 393 bp for Pb-Cre4 mice (primers: 5’-CTGAAGAATGGGACAGGCATT-3’ and 5’-CATCACTCGTTGCATCGACC-3’), 328 bp for the mutant Pten allele (primers: 5’-CAAGCACTCTGCGAACTGAG-3’ and 5’-AAGTTTTTGAAGGCAAGATGC-3’), and 156 bp plus 328 bp for heterozygous Pten (primers: 5’-CAAGCACTCTGCGAACTGAG-3’ and 5’-AAGTTTTTGAAGGCAAGATGC-3’). Animal grouping Mice were randomly divided into four experimental groups: Pten WT and Pten KO mice fed either the control diet or the I3C diet. Mice were sacrificed by CO 2 asphyxiation at 20 weeks. Prostates were harvested immediately, dissected into ventral & lateral prostate (VLP) or dorsal & lateral prostate (DLP), snap-frozen in liquid nitrogen and stored at − 80°C for downstream analysis. For histopathological evaluation, prostate tissues were fixed in 10% phosphate-buffered formalin at room temperature for 24 h. Prostate tissue metabolite analysis via LC–MS Prostate tissues from the four experimental groups were harvested, used for organic extraction of cellular metabolites and subjected to liquid chromatography-mass spectrometry (LC–MS) analysis based on the protocol from our previously published paper . Briefly, about 30 mg of tissue was pulverized with a yttria grinding ball using a CryoMill at 20 Hz for 2 min to ensure complete homogenization. Tissue metabolite extraction was performed on ice. Metabolic quenching was performed by adding cold methanol, acetonitrile and water (40:40:20) containing 0.5% formic acid. After a 20 min incubation, samples were centrifuged at 14,000 g for 10 min. The quenching steps were repeated. Ammonium bicarbonate (15%) was then added to the final supernatant for LC–MS analysis. The LC–MS methodology is described previously . The metabolite data for each animal group were analyzed using MetaboAnalyst 5.0 software. Statistical analysis of individual metabolite ions Identified metabolites from the four experimental groups were analyzed for statistical significance using two-way ANOVA in GraphPad Prism 9.0.2 with Tukey's test for multiple comparisons.
Adjusted P-values < 0.05 for the comparisons of (i) Pten WT (control diet) versus Pten KO (control diet), (ii) Pten KO (control diet) versus Pten KO (I3C diet), and (iii) Pten KO (I3C diet) versus Pten WT (I3C diet) are presented in the corresponding figures to illustrate the effect of I3C on mouse prostate tissue.
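As an illustration of this group-comparison scheme (not the authors' actual GraphPad Prism workflow), the following minimal Python sketch shows how a two-way ANOVA with Tukey post-hoc comparisons could be run for a single metabolite using statsmodels; the group labels and intensity values below are hypothetical.

    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    # Hypothetical peak intensities for one metabolite across the four groups
    data = pd.DataFrame({
        "genotype": ["WT"] * 6 + ["KO"] * 6,
        "diet": (["control"] * 3 + ["I3C"] * 3) * 2,
        "intensity": [1.0, 1.2, 0.9, 1.1, 1.0, 1.3,   # WT control, then WT I3C
                      5.4, 6.1, 5.8, 2.0, 2.4, 1.9],  # KO control, then KO I3C
    })

    # Two-way ANOVA with genotype, diet and their interaction
    model = ols("intensity ~ C(genotype) * C(diet)", data=data).fit()
    print(sm.stats.anova_lm(model, typ=2))

    # Tukey HSD multiple comparisons across the four genotype x diet groups
    groups = data["genotype"] + "_" + data["diet"]
    print(pairwise_tukeyhsd(endog=data["intensity"], groups=groups, alpha=0.05).summary())

In practice such a test would be repeated for each of the identified metabolite ions, and the adjusted P-values for the three comparisons of interest would be reported as described above.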
I3C regulates Pten dependent metabolic pathways in mouse prostate Based on LC–MS untargeted metabolomics, we identified 218 metabolites, which were analyzed and interpreted to determine the most regulated biochemical pathways. We performed metabolic pathway analysis (integrating pathway enrichment analysis and pathway topology analysis) for two-group comparisons (MetaboAnalyst 5.0) with sample sizes ranging from 3 to 6. Statistical p-values were adjusted for multiple testing. To determine the Pten dependent metabolic pathways regulated by I3C, we first identified the most regulated metabolic pathways caused by genetic KO of Pten in mice fed the AIN-93 M control diet (Fig. a); we denote this set of pathways "x". We next compared metabolites in prostate tissues of Pten KO mice fed the AIN-93 M I3C diet with those of Pten KO mice fed the AIN-93 M control diet to determine the metabolic pathways significantly regulated by the difference in diet (Fig. b); we denote this set "y". Furthermore, we compared metabolites in prostate tissues of Pten WT mice fed the AIN-93 M I3C diet with those of Pten WT mice fed the AIN-93 M control diet (Fig. c); we denote this set "z". The intersection of Fig. a and Fig. b, excluding Fig. c, provided the Pten dependent metabolic pathways that were exclusively targeted by the I3C diet (Fig. d). Alternatively, Pten dependent I3C targeted metabolic pathways = (x ∩ y) − z. Based on this analysis, we determined that the I3C diet significantly modulated pyrimidine metabolism, arginine and proline metabolism, and porphyrin metabolism. The list of all identified metabolic pathways along with their statistical significance is provided in the supplementary information.
I3C targets deregulated pyrimidine metabolism in Pten KO mouse prostate Pyrimidine metabolism is one of the top Pten regulated metabolic pathways based on prostate tissue metabolite analysis, as also confirmed in prostate cancer cell lines by Loh et al . . We identified that the I3C diet significantly targets the perturbed pyrimidine metabolism caused by Pten KO. N-carbamoyl-aspartate, a key intermediate of pyrimidine metabolism, was increased by ~ 127 fold with the silencing of Pten; interestingly, I3C substantially reversed this effect, downregulating N-carbamoyl-aspartate levels by ~ 22 fold. The I3C diet also significantly reduced the overexpression of orotate (5.46-fold elevation in Pten KO prostate tissue relative to Pten WT) by about 11-fold relative to untreated Pten KO. In addition, the ribonucleoside cytidine showed a considerable decrease in Pten KO mice with or without I3C treatment, but no statistically significant change was seen in its corresponding nucleoside 5′-monophosphate, i.e., cytidine monophosphate (CMP) (Figs. a and b).
I3C targets deregulated arginine and proline metabolism in Pten KO mouse prostate Arginine and proline metabolism was the next most important target of I3C in prostate tissues of Pten KO mice. This pathway is known to have profound effects on prostate cancer progression and the tumor microenvironment . Based on the metabolite data analysis, hydroxyproline was significantly upregulated by threefold by the genetic KO of Pten; however, I3C reversed this effect by nearly 1.6 fold. At the same time, metabolites such as arginine and guanidinoacetate were not substantially regulated by I3C in prostate tissues of Pten KO mice, although their levels were significantly changed by the suppression of Pten (Figs. a and b).
I3C targets deregulated porphyrin metabolism in Pten KO mouse prostate It was interesting to note that I3C had the potential to target the disturbed heme biosynthesis pathway caused by the genetic KO of Pten. The heme biosynthesis pathway is often considered a cataplerotic pathway for the TCA cycle . 5-Aminolevulinate, the first metabolite of this pathway and a precursor of porphyrin metabolites, exhibited a threefold increase in the prostate of Pten KO mice, and the I3C diet suppressed its level by nearly 40% (Figs. a and b).
I3C influences Pten independent metabolic pathways in mouse prostate In addition to directly regulating metabolic pathways dependent on Pten, I3C significantly affected certain metabolites in the prostate cancer mouse model that were not necessarily regulated by Pten. Referencing the metabolic pathway analysis described in Sect. " " , we first compared Figs. b and c and identified the metabolic pathways regulated by I3C. We then selected Pten independent pathways by excluding those that were modulated by Pten KO (Fig. a). Alternatively, Pten independent I3C targeted metabolic pathways = (y ∩ z) − x. A Venn diagram representing the Pten independent metabolic pathways significantly targeted by I3C is shown in Fig. e. Based on this analysis, we determined the citrate cycle and lipoic acid metabolism as potential targets of I3C. The list of all identified metabolic pathways along with their statistical significance is provided in the supplementary information.
I3C targets the citrate cycle in Pten KO mouse prostate I3C significantly regulated mitochondrial energy metabolism by lowering the levels of citrate cycle metabolites in the prostate of Pten KO mice. We observed statistically significant decreases of 70.6% in citrate and 67.6% in aconitate (Fig. a).
I3C targets lipoic acid metabolism in Pten KO mouse prostate Lipoic acid is an important cofactor of mitochondrial metabolism . Among the identified metabolites of this pathway in Pten KO mouse prostate tissue, we did not observe statistically significant regulatory activity of I3C (Fig. b). However, it is possible that unidentified metabolites may be metabolic targets of I3C.
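To make the pathway selection logic above concrete, the following is a minimal Python sketch of the set operations behind Fig. d and Fig. e; the pathway names placed in each set are illustrative placeholders chosen only to reproduce the outcome described in the text, not the full lists from the supplementary information.

    # Illustrative pathway sets for the three pairwise comparisons (placeholder contents)
    x = {"Pyrimidine metabolism", "Arginine and proline metabolism",
         "Porphyrin metabolism"}                                             # Pten KO vs WT, control diet
    y = {"Pyrimidine metabolism", "Arginine and proline metabolism",
         "Porphyrin metabolism", "Citrate cycle", "Lipoic acid metabolism"}  # Pten KO, I3C vs control diet
    z = {"Citrate cycle", "Lipoic acid metabolism"}                          # Pten WT, I3C vs control diet

    # Pten dependent I3C targets: changed by Pten KO and by I3C in KO mice, but not by I3C in WT mice
    pten_dependent_i3c_targets = (x & y) - z
    # Pten independent I3C targets: changed by I3C in both genotypes, but not by Pten KO itself
    pten_independent_i3c_targets = (y & z) - x

    print(sorted(pten_dependent_i3c_targets))    # pyrimidine, arginine/proline, porphyrin metabolism
    print(sorted(pten_independent_i3c_targets))  # citrate cycle, lipoic acid metabolism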
Pten mediated cancer protection is known to promote oxidative phosphorylation and to regulate metabolic signaling pathways including mitochondrial metabolism, lipid metabolism and glycogen synthesis . With deletion of Pten there is evidence of metabolic reprogramming in prostate cancer cells . To understand the anti-cancer potential of I3C, particularly at the metabolomic level in the Pten KO mouse model of prostate cancer, we treated mice non-invasively with AIN-93 M diet containing 1% (w/w) I3C for 20 weeks. We designed a controlled experiment comprising Pten WT and Pten KO mice, each further subdivided into "control diet" and "I3C diet" groups. Our major objective was to understand the mechanism of action of I3C in prostate carcinogenesis based on metabolic alterations. To examine metabolic regulation in the prostate tissues of the different groups of mice, we performed LC–MS based untargeted metabolomics. We identified the most regulated metabolic pathways potentially targeted by I3C in the chemoprevention of prostate cancer; some of these were Pten-dependent and others were Pten-independent. Pten-dependent pathways are expected to reflect the interaction between I3C and Pten. These included pyrimidine metabolism, arginine and proline metabolism and porphyrin metabolism. Pyrimidine metabolism was the pathway most significantly regulated by the I3C diet. It is well understood that cancer cells reprogram metabolism to support vigorous cell growth by increasing de novo pyrimidine synthesis flux to maintain a steady supply of deoxyribonucleoside triphosphates (dNTPs) . In addition, mammalian target of rapamycin complex 1 (mTORC1) growth signaling post-translationally regulates de novo pyrimidine synthesis, particularly the first three steps, during which N-carbamoyl aspartate (generated from glutamine) plays a critical role in the formation of pyrimidine nucleotides . Accumulating evidence suggests that the abundance of N-carbamoyl aspartate is positively correlated with mTORC1 signaling . The I3C diet showed great potential for inhibiting mTORC1 by suppressing N-carbamoyl aspartate in prostate cancer (Fig. b). Inhibition of dihydroorotate dehydrogenase (DHODH), which contributes to the mitochondrial electron transport chain and catalyzes the conversion of dihydroorotate into orotate, is regarded as a promising therapy for patients with Pten mutant cancers . Interestingly, the I3C diet significantly downregulated orotate levels in prostate tissues of Pten KO mice (Fig. b). Cytidine deaminases catalyze cytidine to uridine transitions and play a significant role in protecting cancer cells against deoxycytidine-based chemotherapies . The inability of the I3C diet to reverse the changes in uridine and cytidine levels in Pten KO driven prostate cancer (Fig. b) points to a probable resistance of prostate cancer cells to I3C. Arginine and proline metabolism was the next Pten dependent metabolomic target of I3C. With the deletion of Pten, the mouse prostate was starved of arginine, whereas the proline level remained stable. However, the proline metabolite hydroxyproline was significantly increased by the loss of Pten and was decreased by I3C (Fig. b). Increased availability of hydroxyproline in solid tissue is associated with hypoxia-inducible factor 1-alpha (HIF1-α) mediated cancer cell survival through increased expression of matrix metalloproteinases and degradation of the extracellular matrix (ECM) .
It was interesting to note that I3C had the potential to target the elevated hydroxyproline in mouse prostate carcinoma. Alteration in porphyrin metabolism was also identified as a Pten-dependent metabolic target of I3C. Although external administration of 5-aminolevulinate is an approved therapeutic strategy in photodynamic cancer therapy , it is unclear which molecular mechanism substantially elevated this intermediate of heme biosynthesis in the prostate of Pten KO mice. Furthermore, I3C downregulated 5-aminolevulinate with statistical significance (Fig. b). Among the metabolic pathways that were not dependent on Pten, we determined the citrate cycle and lipoic acid metabolism as potential metabolic targets of I3C. Prostate cancer cells demonstrate higher citric acid cycle activity relative to benign cells, with a net decrease in citrate secretion, unlike most cancer cells, which resort to aerobic glycolysis . Surprisingly, I3C significantly downregulated several key metabolites of this pathway, including citrate and aconitate (Fig. a), in the prostate of Pten KO mice, suggesting a possible pro-cancer effect of I3C. Regulation of the citrate cycle by I3C is further expected to impact lipoic acid metabolism (Fig. b), an essential contributor to cell growth and mitochondrial metabolism . I3C has been regarded as a chemopreventive agent due to its antioxidant activity, particularly via Nrf2-mediated regulation of phase II drug-metabolizing, antioxidant and apoptotic genes . Because Nrf2 is a master regulator of the cellular oxidative stress response, its activation by I3C may have triggered a survival mechanism that allowed cells to cope and thrive in the early stages of the Pten KO mediated prostate tumor microenvironment, thereby contributing to progression; this possibility requires further investigation. Taken together, the metabolic correlation between Pten and the Nrf2 activator I3C can be considered an emerging marker for diagnosing and monitoring prostate cancer.
In summary, we identified, for the first time, metabolic targets of I3C that describe its effects against Pten KO prostate cancer. We identified Pten-dependent (pyrimidine metabolism, arginine and proline metabolism, and porphyrin metabolism) as well as Pten-independent (citrate cycle and lipoic acid metabolism) pathways targeted by I3C. Additional in-vivo studies are needed to determine at which stage of prostate cancer development (initiation, promotion or progression) I3C is best suited to exert anti-cancer effects, including cancer prevention.
Supplementary Material 1 Supplementary Material 2 Supplementary Material 3 Supplementary Material 4
|
Exploring Single-Probe
Single-Cell Mass Spectrometry:
Current Trends and Future Directions | 3ec594bc-890b-4913-b96d-f8aae69663a0 | 11912137 | Biochemistry[mh] | In recent years, single-cell analysis
has emerged as a transformative
approach in analytical chemistry, offering unprecedented insights
into cellular heterogeneity and enabling the study of intricate biological
processes at the cellular level. Cellular heterogeneity is a common
feature in almost all biological systems. Beyond genetic differences,
it can also arise from nongenetic mechanisms, where cells with similar
genotypes exhibit distinct morphological and phenotypical traits. , This heterogeneity may stem from diverse factors such as the cell
cycle, stochastic variations in gene expression, and interactions
with the surrounding microenvironment. − In addition to studies
of cell heterogeneity, single-cell analysis is needed for other applications,
including research on rare cells (e.g., cancer stem cells), − developmental biology (e.g., cell changes during embryonic development), − and personalized medicine (e.g., analyzing individual cells from
a patient for personalized treatment). − Owing to its multiple advantages (e.g., high
sensitivity, high
accuracy, and broad molecular coverage), mass spectrometry (MS) is
regarded as one of the most important techniques for molecular analysis.
With recent advancements, a variety of single-cell MS
(SCMS) methods have been established as powerful tools to analyze
large (e.g., proteins) and small (e.g., metabolites) molecules within individual
cells. , Metabolomics is the study of metabolites,
which are broadly defined as small molecules with molecular weight <1500
amu, such as lipids, fatty acids, peptides, amino acids, nucleic acids,
sugars, and organic acids. , The Single-probe, which
is a microscale sampling and ionization device, can be coupled to
a mass spectrometer for SCMS metabolomics studies of live single cells
without complex sample preparation or labeling. The interest in the
Single-probe SCMS stems from its potential to deepen our understanding
of single-cell metabolism, driving both fundamental research and clinical
applications forward. Notable innovations include integration with
microscopy methods (e.g., fluorescence microscopy) and combination
with chemical reactions, greatly extending the application of this
technique. As a multifunctional device, the Single-probe can be coupled
to mass spectrometry for studies in multiple different areas, including
single-cell metabolomics, , MS imaging (MSI) of
tissue slices, − and the analysis of extracellular molecules in live single spheroids, under ambient conditions. Additionally, we
have developed other techniques, such as the T-probe , and micropipette capillary, to facilitate
SCMS measurements. As a robust technique, the Single-probe SCMS method
has been used in a variety of fundamental studies such as analysis
of cellular heterogeneity, − investigation of cell–cell
interactions, detection of signaling
molecules, , and environmental influences on cell metabolism. , In addition, the Single-probe SCMS technique has promising clinical
applications, as it has been implemented to detect and quantify drug
molecules in single cells, , − characterize cancer stem cells, investigate
drug resistance and metabolic responses to drugs, − and study human diseases. , In addition to mammalian cells, the Single-probe SCMS technique
has been used to study plant cells. , This Review
aims to explore recent advancements and applications of the Single-probe
SCMS techniques, underscoring its growing significance in analytical
chemistry. As this technology continues to evolve, it promises to
usher in a new era in microscale bioanalysis, enabling unprecedented
insights into complex biological systems at the cellular and tissue
levels.
The current SCMS
techniques can broadly be classified into two
categories based on their sampling and ionization environments: vacuum-based
and ambient methods. , , Vacuum-Based Ionization Techniques These methods are
known for their high sensitivity and spatial resolution, and they
are well-suited for single-cell analysis. However, these experiments require complex sample preparation, such
as dehydration and matrix application, to facilitate ion generation
through lasers or ion beams. Key single-cell
MS technologies in this category include secondary ion mass spectrometry
(SIMS), gas cluster ion beam (GCIB), matrix-assisted laser desorption/ionization
(MALDI), and matrix-free laser desorption/ionization (LDI). SIMS-Based Methods SIMS, originally demonstrated by
Herzog and Viehböck in 1949, evolved
for single-cell analysis in the 1960s. , SIMS provides
sensitive analysis of surface compositions by sputtering analytes
with a focused primary ion beam (e.g., 16 O – , 16 O 2 + , and 40 Ar + ), which generates secondary ions from surface molecules.
The established SIMS methods for single-cell analysis include TOF-SIMS, − nanoSIMS, , and the newer GCIB-SIMS. , These techniques offer high spatial resolution (e.g., 50 nm spatial
resolution can be achieved using nanoSIMS); however, challenges remain
for analyzing small biological samples such as single cells. These challenges include the high vacuum requirement,
low ionization efficiency for biomolecules, and complex data analysis
due to extensive fragmentation from high-energy ion bombardment. Advances in the ion beam source, such as the
GCIB, have been introduced to mitigate fragmentation. , Laser Desorption/Ionization (LDI)-Based Methods These
techniques employ laser beams at specific wavelengths to irradiate
the sample surface, desorbing and ionizing molecules. Although laser
technology emerged in 1960 with Maiman’s invention, the potential
of LDI for MS was only realized in the 1980s, as early LDI methods
could ionize only molecules absorbing specific laser wavelengths. Key LDI approaches include matrix-assisted laser
desorption/ionization (MALDI-MS) and matrix-free LDI. MALDI, a soft
ionization method, significantly enhances the ionization efficiency
of large biomolecules such as proteins and polymers. , In a MALDI experiment, an organic matrix compound with strong UV
absorption assists in laser absorption, enabling efficient energy
transfer to the analytes. Ionization occurs through the interactions
between the analyte and ionized matrix molecules, but the high vacuum
environment required to prevent atmospheric interference may lead
to molecular delocalization and other sample alterations. Additionally,
matrix compounds often interfere with detection of low-molecular-weight
compounds (<1000 m / z ), complicating
studies on small molecules like metabolites and drug compounds. − To minimize interference with matrix molecules, alternative MS techniques,
such as matrix-free laser desorption/ionization MS (LDI-MS) , and label-assisted laser desorption/ionization MS (LALDI-MS), , have been developed. These methods are particularly useful for analyzing
relatively large cells, including plant and algae cells. In LALDI-MS, specific
functional groups (e.g., fluorophores or polyaromatic structures)
are used to label target molecules (e.g., peptides) to enable their
desorption and ionization when exposed to soft lasers operating at
visible wavelengths. While some LDI-based methods are described as
matrix-free, it is often challenging to completely avoid the use of
assistive molecules (e.g., 1,5-diaminonaphthalene) when studying biological
samples. This is largely due to the complexity of biomolecules, which
often demand varied levels of desorption and ionization energy. While these vacuum-based SCMS methods minimize
interference from experimental contaminants, allowing for enhanced
detection sensitivity and high throughput analysis, they require nontrivial
sample preparation as well as preclude the analysis of live cells
due to the extensive pretreatments involved. , Ambient-Based Sampling and Ionization Techniques Compared
with vacuum-based techniques, ambient SCMS methods offer greater flexibility,
enabling in-situ analysis of cells within their native or near-native
environments. This capability makes ambient SCMS more suitable for
live cell studies. However, the sensitivity of ambient techniques
is typically lower than that of vacuum-based methods, due to reduced ionization
efficiency caused by interference from matrix molecules. Additionally, the throughput of most ambient-based
methods tends to be lower, limiting their applicability for studies
requiring a large number of cells. Despite these challenges, significant
advancements have been made to improve the throughput of ambient SCMS
techniques, making them a valuable tool for minimally invasive live-cell
analysis. Many ambient SCMS techniques
typically employ physical probes, lasers, or charged solvent droplets
to facilitate analyte sampling and ionization. According to
the methods used for sampling contents from single cells, we classified
ambient SCMS techniques into three categories: direct suction by microprobes,
microextraction by probe with solid or liquid phase, and direct desorption
and ionization. , The first two categories primarily
use probe-based approaches. Due to the small size of single cells,
often only a few micrometers, traditional sampling and preparation
techniques from bulk analyses are not applicable. Microprobes were
introduced to meet the specific requirements of single-cell analysis. Direct Suction by Microprobes The concept of the microprobe
was initially proposed by Masujima in 1999, leading to the first ambient SCMS experiment in 2008 using live
single-cell video mass spectrometry (Video-MS). In these early experiments, cells were monitored using
a video microscope, and a gold-coated capillary nanoelectrospray ionization
(nanoESI) emitter (tip size is ∼1–2 μm) was employed
as a micropipette to extract cell contents. The same nanoESI emitter was then used for ionization in MS analysis.
This technique has been applied to study plant cells, quantify analytes
in live SCMS, and integrate with fluorescence imaging, laser microscopy,
and microdroplet array systems. − Vertes et al. integrated the capillary microsampling
system with ion mobility MS to identify
metabolites in single human hepatocytes. Additionally, this system has been applied to analyze neurons of
the mollusk Lymnaea stagnalis . When combined with fluorescence microscopy,
specific subcellular components (e.g., cytoplasm and nucleus) can
be selected for analysis. The pressure probes facilitated direct sample injection using an internal electrode
capillary, reducing the need for extensive
sample preparation. Pico-ESI capabilities enabled these probes to
be operated efficiently under ambient conditions. Other direct-suction
methods, including nanopipettes, micropipettes, and T-probes, , have also
been developed for ambient single-cell analysis. Microextraction by Probe with Solid or Liquid Phase These methods for single-cell analysis are divided into solid–liquid
and liquid–liquid microextractions. In solid–liquid microextraction, a surface-treated metal needle is introduced into a single cell
to extract and enrich analytes, which are subsequently analyzed by
a mass spectrometer under ambient conditions. Techniques, such as
probe electrospray ionization, direct
sampling probes, , and surface-coated probe nanoESI-MS, − are examples of solid–liquid microextraction developed for
single-cell analysis. On the other hand, liquid–liquid microextraction
employs organic solvents (e.g., methanol and acetonitrile) for extraction.
These methods generally do not require additional solvents for MS
analysis, leading to a higher throughput compared to solid–liquid
techniques. In liquid–liquid extraction, a capillary is typically
used to introduce the solvent, which then carries dissolved analytes
to the mass spectrometer. Major liquid–liquid microextraction
methods include nanomanipulation, nano-DESI, and the Single-probe
MS. In nanomanipulation coupled nanospray MS, introduced by Phelps
et al., a quartz probe punctures the
cell membrane, and a nanoESI emitter is used to extract analytes.
Nano-DESI, developed by the Laskin group in 2012, utilizes a primary capillary to deliver solvent to the
sample and a secondary capillary for solution extraction and ionization.
Originally designed for MS imaging, nano-DESI was later adapted by
Lanekoff et al. for single-cell analysis. In the subsequent sections of this Review, we will provide a detailed
discussion on the applications of the Single-probe techniques in single-cell
analysis. Direct Desorption and Ionization These techniques involve
the application of laser energy, charged particles, or high electric
fields to facilitate analyte desorption/ionization from individual
cells, producing gas-phase ions suitable for MS detection under ambient
and open-air conditions. Approaches include desorption electrospray
ionization (DESI)-MS, , easy ambient sonic-spray ionization,
drop-on-demand inkjet printing with probe electrospray ionization
(PESI)-MS, and laser-based methods such as laser ablation electrospray
ionization (LAESI), , laser desorption/ionization
droplet delivery (LDIDD), and atmospheric-pressure
MALDI (AP-MALDI).
These methods are
known for their high sensitivity and spatial resolution, and they
are well-suited for single-cell analysis. However, these experiments require complex sample preparation, such
as dehydration and matrix application, to facilitate ion generation
through lasers or ion beams. Key single-cell
MS technologies in this category include secondary ion mass spectrometry
(SIMS), gas cluster ion beam (GCIB), matrix-assisted laser desorption/ionization
(MALDI), and matrix-free laser desorption/ionization (LDI). SIMS-Based Methods SIMS, originally demonstrated by
Herzog and Biehböck in 1949, evolved
for single-cell analysis in the 1960s. , SIMS provides
sensitive analysis of surface compositions by sputtering analytes
with a focused primary ion beam (e.g., 16 O – , 16 O 2 + , and 40 Ar + ), which generates secondary ions from surface molecules.
The established SIMS methods for single-cell analysis include TOF-SIMS, − nanoSIMS, , and the newer GCIB-SIMS. , These techniques render high spatial resolution (e.g., 50 nm spatial
resolution can be achieved using nanoSIMS); however, challenges remain
for analyzing small biological samples such as single cells. These challenges include the high vacuum requirement,
low ionization efficiency for biomolecules, and complex data analysis
due to extensive fragmentation from high-energy ion bombardment. Advances in the ion beam source, such as the
GCIB, have been introduced to mitigate fragmentation. , Laser Desorption/Ionization (LDI)-Based Methods These
techniques employ laser beams at specific wavelengths to irradiate
the sample surface, desorbing and ionizing molecules. Although laser
technology emerged in 1960 with Maiman’s invention, the potential
of LDI for MS was only realized in the 1980s, as early LDI methods
could ionize only molecules absorbing specific laser wavelengths. Key LDI approaches include matrix-assisted laser
desorption/ionization (MALDI-MS) and matrix-free LDI. MALDI, a soft
ionization method, significantly enhances the ionization efficiency
of large biomolecules such as proteins and polymers. , In a MALDI experiment, an organic matrix compound with strong UV
absorption assists in laser absorption, enabling efficient energy
transfer to the analytes. Ionization occurs through the interactions
between the analyte and ionized matrix molecules, but the high vacuum
environment required to prevent atmospheric interference may lead
to molecular delocalization and other sample alterations. Additionally,
matrix compounds often interfere with detection of low-molecular-weight
compounds (<1000 m / z ), complicating
studies on small molecules like metabolites and drug compounds. − To minimize interference with matrix molecules, alternative MS techniques,
such as matrix-free laser desorption/ionization MS (LDI-MS) , and label-assisted laser desorption/ionization MS (LALDI-MS), , have been developed. These methods are particularly useful for analyzing
relatively large cells, including plant and algae cells. In LALDI-MS, specific
functional groups (e.g., fluorophores or polyaromatic structures)
are used to label target molecules (e.g., peptides) to enable their
desorption and ionization when exposed to soft lasers operating at
visible wavelengths. While some LDI-based methods are described as
matrix-free, it is often challenging to completely avoid the use of
assistive molecules (e.g., 1,5-diaminonaphthalene) when studying biological
samples. This is largely due to the complexity of biomolecules, which
often demand varied levels of desorption and ionization energy. While these vacuum-based SCMS methods minimize
interference from experimental contaminants, allowing for enhanced
detection sensitivity and high throughput analysis, they require nontrivial
sample preparation as well as preclude the analysis of live cells
due to the extensive pretreatments involved. ,
SIMS, originally demonstrated by
Herzog and Biehböck in 1949, evolved
for single-cell analysis in the 1960s. , SIMS provides
sensitive analysis of surface compositions by sputtering analytes
with a focused primary ion beam (e.g., 16 O – , 16 O 2 + , and 40 Ar + ), which generates secondary ions from surface molecules.
The established SIMS methods for single-cell analysis include TOF-SIMS, − nanoSIMS, , and the newer GCIB-SIMS. , These techniques render high spatial resolution (e.g., 50 nm spatial
resolution can be achieved using nanoSIMS); however, challenges remain
for analyzing small biological samples such as single cells. These challenges include the high vacuum requirement,
low ionization efficiency for biomolecules, and complex data analysis
due to extensive fragmentation from high-energy ion bombardment. Advances in the ion beam source, such as the
GCIB, have been introduced to mitigate fragmentation. ,
These
techniques employ laser beams at specific wavelengths to irradiate
the sample surface, desorbing and ionizing molecules. Although laser
technology emerged in 1960 with Maiman’s invention, the potential
of LDI for MS was only realized in the 1980s, as early LDI methods
could ionize only molecules absorbing specific laser wavelengths. Key LDI approaches include matrix-assisted laser
desorption/ionization (MALDI-MS) and matrix-free LDI. MALDI, a soft
ionization method, significantly enhances the ionization efficiency
of large biomolecules such as proteins and polymers. , In a MALDI experiment, an organic matrix compound with strong UV
absorption assists in laser absorption, enabling efficient energy
transfer to the analytes. Ionization occurs through the interactions
between the analyte and ionized matrix molecules, but the high vacuum
environment required to prevent atmospheric interference may lead
to molecular delocalization and other sample alterations. Additionally,
matrix compounds often interfere with detection of low-molecular-weight
compounds (<1000 m / z ), complicating
studies on small molecules like metabolites and drug compounds. − To minimize interference with matrix molecules, alternative MS techniques,
such as matrix-free laser desorption/ionization MS (LDI-MS) , and label-assisted laser desorption/ionization MS (LALDI-MS), , have been developed. These methods are particularly useful for analyzing
relatively large cells, including plant and algae cells. In LALDI-MS, specific
functional groups (e.g., fluorophores or polyaromatic structures)
are used to label target molecules (e.g., peptides) to enable their
desorption and ionization when exposed to soft lasers operating at
visible wavelengths. While some LDI-based methods are described as
matrix-free, it is often challenging to completely avoid the use of
assistive molecules (e.g., 1,5-diaminonaphthalene) when studying biological
samples. This is largely due to the complexity of biomolecules, which
often demand varied levels of desorption and ionization energy. While these vacuum-based SCMS methods minimize
interference from experimental contaminants, allowing for enhanced
detection sensitivity and high throughput analysis, they require nontrivial
sample preparation as well as preclude the analysis of live cells
due to the extensive pretreatments involved. ,
Compared
with vacuum-based techniques, ambient SCMS methods offer greater flexibility,
enabling in-situ analysis of cells within their native or near-native
environments. This capability makes ambient SCMS more suitable for
live cell studies. However, the sensitivity of ambient techniques
is typically lower than that of vacuum-based methods, due to ionization
efficiency caused by interference from matrix molecules. Additionally, the throughput of most ambient-based
methods tends to be lower, limiting their applicability for studies
requiring a large number of cells. Despite these challenges, significant
advancements have been made to improve the throughput of ambient SCMS
techniques, making them a valuable tool for minimally invasive live-cell
analysis. Many ambient SCMS techniques
typically employ physical probes, lasers, or charged solvent droplets
to facilitate analyte sampling and ionization. According to
the methods used for sampling contents from single cells, we classified
ambient SCMS techniques into three categories: direct suction by microprobes,
microextraction by probe with solid or liquid phase, and direct desorption
and ionization. , The first two categories primarily
use probe-based approaches. Due to the small size of single cells,
often only a few micrometers, traditional sampling and preparation
techniques from bulk analyses are not applicable. Microprobes were
introduced to meet the specific requirements of single-cell analysis. Direct Suction by Microprobes The concept of the microprobe
was initially proposed by Masujima in 1999, leading to the first ambient SCMS experiment in 2008 using live
single-cell video mass spectrometry (Video-MS). In these early experiments, cells were monitored using
a video microscope, and a gold-coated capillary nanoelectrospray ionization
(nanoESI) emitter (tip size is ∼1–2 μm) was employed
as a micropipette to extract cell contents. The same nanoESI emitter was then used for ionization in MS analysis.
This technique has been applied to study plant cells, quantify analytes
in live SCMS, and integrate with fluorescence imaging, laser microscopy,
and microdroplet array systems. − Vertes et al. integrated the capillary microsampling
system with ion mobility MS to identify
metabolites in single human hepatocytes. Additionally, this system has been applied to analyze neurons of
the mollusk Lymnaea stagnalis . When combined with fluorescence microscopy,
specific subcellular components (e.g., cytoplasm and nucleus) can
be selected for analysis. The pressure probes facilitated direct sample injection using an internal electrode
capillary, reducing the need for extensive
sample preparation. Pico-ESI capabilities enabled these probes to
be operated efficiently under ambient conditions. Other direct-suction
methods, including nanopipettes, micropipettes, and T-probes, , have also
been developed for ambient single-cell analysis. Microextraction by Probe with Solid or Liquid Phase These methods for single-cell analysis are divided into solid–liquid
and liquid–liquid microextractions. In solid–liquid microextraction, a surface-treated metal needle is introduced into a single cell
to extract and enrich analytes, which are subsequently analyzed by
a mass spectrometer under ambient conditions. Techniques, such as
probe electrospray ionization, direct
sampling probes, , and surface-coated probe nanoESI-MS, − are examples of solid–liquid microextraction developed for
single-cell analysis. On the other hand, liquid–liquid microextraction
employs organic solvents (e.g., methanol and acetonitrile) for extraction.
These methods generally do not require additional solvents for MS
analysis, leading to a higher throughput compared to solid–liquid
techniques. In liquid–liquid extraction, a capillary is typically
used to introduce the solvent, which then carries dissolved analytes
to the mass spectrometer. Major liquid–liquid microextraction
methods include nanomanipulation, nano-DESI, and the Single-probe
MS. In nanomanipulation coupled nanospray MS, introduced by Phelps
et al., a quartz probe punctures the
cell membrane, and a nanoESI emitter is used to extract analytes.
Nano-DESI, developed by the Laskin group in 2012, utilizes a primary capillary to deliver solvent to the
sample and a secondary capillary for solution extraction and ionization.
Originally designed for MS imaging, nano-DESI was later adapted by
Lanekoff et al. for single-cell analysis. In the subsequent sections of this Review, we will provide a detailed
discussion on the applications of the Single-probe techniques in single-cell
analysis. Direct Desorption and Ionization These techniques involve
the application of laser energy, charged particles, or high electric
fields to facilitate analyte desorption/ionization from individual
cells, producing gas-phase ions suitable for MS detection under ambient
and open-air conditions. Approaches include desorption electrospray
ionization (DESI)-MS, , easy ambient sonic-spray ionization,
drop-on-demand inkjet printing with probe electrospray ionization
(PESI)-MS, and laser-based methods such as laser ablation electrospray
ionization (LAESI), , laser desorption/ionization
droplet delivery (LDIDD), and atmospheric-pressure
MALDI (AP-MALDI).
The concept of the microprobe
was initially proposed by Masujima in 1999, leading to the first ambient SCMS experiment in 2008 using live
single-cell video mass spectrometry (Video-MS). In these early experiments, cells were monitored using
a video microscope, and a gold-coated capillary nanoelectrospray ionization
(nanoESI) emitter (tip size is ∼1–2 μm) was employed
as a micropipette to extract cell contents. The same nanoESI emitter was then used for ionization in MS analysis.
This technique has been applied to study plant cells, quantify analytes
in live SCMS, and integrate with fluorescence imaging, laser microscopy,
and microdroplet array systems. − Vertes et al. integrated the capillary microsampling
system with ion mobility MS to identify
metabolites in single human hepatocytes. Additionally, this system has been applied to analyze neurons of
the mollusk Lymnaea stagnalis . When combined with fluorescence microscopy,
specific subcellular components (e.g., cytoplasm and nucleus) can
be selected for analysis. The pressure probes facilitated direct sample injection using an internal electrode
capillary, reducing the need for extensive
sample preparation. Pico-ESI capabilities enabled these probes to
be operated efficiently under ambient conditions. Other direct-suction
methods, including nanopipettes, micropipettes, and T-probes, , have also
been developed for ambient single-cell analysis.
These methods for single-cell analysis are divided into solid–liquid
and liquid–liquid microextractions. In solid–liquid microextraction, a surface-treated metal needle is introduced into a single cell
to extract and enrich analytes, which are subsequently analyzed by
a mass spectrometer under ambient conditions. Techniques, such as
probe electrospray ionization, direct
sampling probes, , and surface-coated probe nanoESI-MS, − are examples of solid–liquid microextraction developed for
single-cell analysis. On the other hand, liquid–liquid microextraction
employs organic solvents (e.g., methanol and acetonitrile) for extraction.
These methods generally do not require additional solvents for MS
analysis, leading to a higher throughput compared to solid–liquid
techniques. In liquid–liquid extraction, a capillary is typically
used to introduce the solvent, which then carries dissolved analytes
to the mass spectrometer. Major liquid–liquid microextraction
methods include nanomanipulation, nano-DESI, and the Single-probe
MS. In nanomanipulation coupled nanospray MS, introduced by Phelps
et al., a quartz probe punctures the
cell membrane, and a nanoESI emitter is used to extract analytes.
Nano-DESI, developed by the Laskin group in 2012, utilizes a primary capillary to deliver solvent to the
sample and a secondary capillary for solution extraction and ionization.
Originally designed for MS imaging, nano-DESI was later adapted by
Lanekoff et al. for single-cell analysis. In the subsequent sections of this Review, we will provide a detailed
discussion on the applications of the Single-probe techniques in single-cell
analysis.
Direct Desorption and Ionization
These techniques involve
the application of laser energy, charged particles, or high electric
fields to facilitate analyte desorption/ionization from individual
cells, producing gas-phase ions suitable for MS detection under ambient
and open-air conditions. Approaches include desorption electrospray
ionization (DESI)-MS, easy ambient sonic-spray ionization,
drop-on-demand inkjet printing with probe electrospray ionization
(PESI)-MS, and laser-based methods such as laser ablation electrospray
ionization (LAESI), laser desorption/ionization
droplet delivery (LDIDD), and atmospheric-pressure
MALDI (AP-MALDI).
The Single-probe is a
sophisticated analytical tool composed of
several integral components that work together to facilitate microscale
bioanalysis. Here, we provide a review of its fabrication as well
as applications in SCMS analysis of small molecules (i.e., semiquantitative
analysis, quantitative analysis, integration with chemical reactions,
evaluation of single cell sample preparation, and combined advanced
data analysis), MSI of biological tissues, MS analysis of extracellular
molecules in live spheroids, and other studies performed using the
Single-probe-based
techniques.
Single-Probe Fabrication
The fabrication of the Single-probe
( a–c)
has been thoroughly described in our previous studies. This assembly includes three key elements:
a laser-pulled dual-bore quartz needle, a solvent-providing silica capillary, and a nanoelectrospray ionization (nano-ESI) emitter that efficiently ionizes the extracted metabolites. The fabrication of the Single-probe begins with the precise shaping of a dual-bore quartz
needle (outer diameter (OD) 500 μm; inner diameter (ID) 127
μm, Friedrich & Dimmock, Inc., Millville, NJ, USA) using a laser pipet puller (Model P-2000, Sutter Instrument Co., Novato,
CA). This pulling process creates a fine, tapered structure in the
quartz needle. Following this step, a fused silica capillary (outer
diameter 105 μm, inner diameter 40 μm, Polymicro Technologies,
Phoenix, AZ) is embedded into one bore of the pulled quartz needle
to serve as the solvent delivery channel. Additionally, a nano-ESI
emitter is positioned within the other bore. The nano-ESI emitter
is formed by heating a similar fused silica capillary with a butane
micro torch to achieve a sharp, functional tip for effective ionization.
To secure both the capillary and the nano-ESI emitter within the dual-bore
needle, UV-curing epoxy (Prime Dental, Item No. 006.030, Chicago,
IL) is applied to glue these parts and is then cured under a UV LED
lamp. To ensure the ease of use and stability of the
device during sampling,
the Single-probe is mounted on a microscope glass slide using standard
epoxy adhesive (Part No. 20945, ITW Devcon, Inc., Danvers, MA) ( b). A Conductive
MicroTight Union (M-539, IDEX Health & Science, LLC) connects
the fused silica capillary (ID: 50 μm, OD: 150 μm) to
the solvent-providing capillary. A PEEK tubing (F-181 and F-380, IDEX
Health & Science, LLC) is used as the sleeve of the fused silica
capillary to ensure a tight connection. The ionization voltage is
applied to the union instead of the nano-ESI emitter, enabling efficient
solvent delivery and ionization. To construct a functioning setup,
the Single-probe is combined with other components, including a motorized
XYZ-stage (CONEX- MFACC, Newport Corp., Irvine, CA), a manual XYZ-translation
stage (Compact Dovetail XYZ Linear Stage, Newport Corp., Irvine, CA),
a stereomicroscope (Supereyes T004 Digital Microscope, Shenzhen D&F
Co., Ltd., Shenzhen, China), and a flexible connector (MXB-3 h, Siskiyou
Corp., Grants Pass, OR). All components are integrated on an optical
board (Thorlabs Inc., Newton, NJ, US) interfaced with the mass spectrometer
(Thermo LTQ Orbitrap XL mass spectrometer, Thermo Fisher Scientific,
Inc., Waltham, MA) ( d).
Single-Probe SCMS Studies
Single-Probe SCMS in Semiquantitative Studies
The Single-probe
SCMS technique has been used to characterize cellular metabolites
through semiquantitative analysis, in which ion intensities of metabolites
are normalized to the total ion current (TIC) as commonly performed
in MS studies. The Single-probe semiquantitative approaches have been
used for uncovering molecular diversity and cellular heterogeneity. This section outlines the progression of
semiquantitative applications of the Single-probe SCMS technique in
single-cell metabolomics, highlighting its evolution across diverse
biological contexts. One of the earliest qualitative applications
was demonstrated in 2018 by Sun et al., who employed the Single-probe
SCMS to investigate intracellular metabolite changes in Scrippsiella trochoidea , a marine dinoflagellate,
under various environmental conditions. Bulk filtration techniques are predominantly used to assess the
physiological responses of microbial populations to environmental
changes. The Single-probe SCMS technique provided profiles of intracellular
metabolites in these single marine algae cells altered by different
conditions such as light variation and nitrogen limitation. This work showcases the potential of single-cell metabolomics for studying marine algal cells’ responses to environmental stressors without extensive sample manipulation. To further extend the
scope of SCMS, a novel platform integrating
a commercially available cell manipulation system with the Single-probe
technique was developed, allowing for the analysis of suspended cells
such as leukemia cells. This Integrated Cell
Manipulation Platform (ICMP) coupled with a high-resolution mass spectrometer
was further used for quantitative analysis of intracellular metabolites
from patient-derived suspension cells such as those in urine from
bladder cancer patients (as illustrated in and detailed in section ). This system not only expanded the range of cell types that
could be analyzed with minimal sample preparation but also enhanced
specificity and sensitivity in distinguishing cellular features. The
versatility of this approach highlighted its potential for personalized
medicine, offering a rapid, real-time method to analyze live patient
cells and tailor therapeutic strategies. The semiquantitative
applications of the Single-probe SCMS methods
have been extended to studying drug-resistant cancer cells. Colorectal cancer cells with irinotecan resistance were found to possess elevated levels of unsaturated lipids and cancer stem cell markers, pointing to upregulation of SCD1 as a key factor in resistance.
These findings suggested that inhibiting SCD1 could enhance irinotecan
sensitivity, offering a potential approach to overcoming drug resistance
in clinical treatment. More recently, Chen et al. applied SCMS to
evaluate the synergistic effects of combining irinotecan with metformin,
an antidiabetic medicine, in irinotecan-resistant colorectal cancer
cells. The study revealed that metformin
could downregulate lipids and fatty acids, suppressing cancer cell
metabolism. Combining metformin with irinotecan further enhanced the
suppression of glycosylated ceramide production, a critical component
of cancer cell metabolism. These studies demonstrated the utility
of SCMS in investigating drug resistance mechanisms and underscored
its potential for broader applications in cancer therapy. The
Single-probe SCMS has been coupled with fluorescence microscopy
to investigate cell–cell interactions. Chen et al. employed
the technique in a co-culture system, which included drug-resistant
and drug-sensitive cancer cells, to study metabolism affected by cell–cell
interactions ( a–c). Two types of co-culture systems
were studied, including indirect (two different types of cells were
cultured in the same well but separated by Transwell) and direct (two
different types of cells were directly cultured in the same well without
separation) co-culture systems. In the direct co-culture experiments,
one type of cells was labeled with GFP (green fluorescent protein),
and fluorescence microscopy was combined with the Single-probe SCMS
to analyze metabolites of single cells in each group. The study revealed
that drug-sensitive cells exhibited increased resistance and altered
metabolic profiles when co-cultured with drug-resistant cells, shedding
light on the role of cellular communication in the development of
chemotherapy resistance. This application demonstrated that the integration of SCMS and microscopy techniques can provide unique insights into
the metabolic shifts driven by cell–cell interactions, paving
the way for future studies on the metabolic responses of heterogeneous
cell populations. The Single-probe SCMS has also been coupled with
bright-field microscopy
to study cell heterogeneity. Nguyen et al. extended the application
of SCMS to infectious diseases by investigating host cell heterogeneity
during Trypanosoma cruzi ( T. cruzi ) infection, the causative agent of Chagas
disease (CD). The study revealed significant
metabolic differences between infected cells, which contain stained
parasites, and uninfected cells, as well as the presence of a bystander effect, in which uninfected cells adjacent to infected ones display altered metabolism. The bystander effect suggested a novel
mechanism for lesion development in parasite-free areas, offering
crucial insights into the pathogenesis of CD. This work represents
the first use of SCMS in studying mammalian-infectious diseases, showcasing
the technique’s broad applicability beyond cancer research. The Single-probe SCMS technique has significantly advanced semiquantitative
single-cell metabolomics by enabling precise, real-time analysis of
individual cells across diverse biological systems. Its applications
cover multiple areas such as marine microorganisms, human diseases,
and cell–cell communication, offering unprecedented insight
into cellular heterogeneity and metabolic dynamics. As the technique
continues to evolve, it holds immense potential for furthering our
understanding of complex biological processes and driving innovations
in personalized medicine.
Single-Probe SCMS in Quantitative Studies
The Single-probe
SCMS technique has been used for quantification of anticancer drugs
(both amounts and concentrations) in live individual cells under ambient
conditions. Due to its unique design,
the internal standard can be added into the sampling solvent (e.g.,
acetonitrile) at a known concentration. The internal standard can
be an isotopically labeled compound or a species with a structure highly similar to that of the target compound. When performing quantitative SCMS measurements of
drug-treated cells, the Single-probe tip is inserted into a single
living cell to extract intracellular chemicals (including drug molecules).
Both the internal standard and drug molecules are simultaneously delivered
to the nano-ESI emitter for ionization and detected by MS. Multiple
factors (e.g., the ion intensities of the drug and internal standard,
internal standard’s concentration and flow rate, and data acquisition
time) must be considered for the quantification. If the isotopically
labeled analogue is not available, the internal standard can be selected
from a species with a structure similar to the target compound, in which case a calibration curve must be established. The quantitative
SCMS technique makes it possible to accurately estimate the amounts
of drugs in individual cells, offering insights into how individual
cells metabolize and retain therapeutic agents.
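To make these factors concrete, the following minimal Python sketch (illustrative only, not code from the cited studies; all numerical values are hypothetical) estimates a per-cell drug amount from the drug-to-internal-standard intensity ratio and the moles of internal standard delivered during the data acquisition window, assuming comparable ionization efficiencies for the drug and its isotopically labeled standard.

    # Minimal sketch (hypothetical values): per-cell drug amount from the
    # drug-to-internal-standard (IS) intensity ratio, assuming the isotopically
    # labeled IS and the drug ionize with essentially the same efficiency.
    def drug_amount_mol(i_drug, i_is, c_is_molar, flow_l_per_min, t_min):
        """Estimated amount of drug (mol) extracted from one cell."""
        n_is = c_is_molar * flow_l_per_min * t_min   # moles of IS delivered during acquisition
        return (i_drug / i_is) * n_is

    # Hypothetical numbers for illustration only
    amount = drug_amount_mol(i_drug=2.4e5, i_is=6.0e5,
                             c_is_molar=1.0e-6,      # 1.0 uM IS in the sampling solvent
                             flow_l_per_min=50e-9,   # 50 nL/min solvent flow
                             t_min=0.5)              # 30 s data acquisition
    print(f"estimated drug amount per cell: {amount:.1e} mol")   # 1.0e-14 mol here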
This method was first employed to rapidly quantify the absolute amounts of the
anticancer drug in individual adherent cancer cells under ambient
conditions. Pan et al. performed the
measurement of anticancer drug amounts within live cells. In this study,
both HCT-116 and HeLa cell lines were employed to investigate the
intracellular uptake of irinotecan under various treatment durations
and concentrations. To minimize the diffusion loss of cellular contents
and internal standard (irinotecan-d10), glass chips containing microwells
(diameter, 55 μm; depth, 25 μm) were used during cell
incubation and treatment. Single cells in individual microwells were
selected for measurements. The amount of irinotecan within single
cells was heterogeneous across different cells. When comparing these
single cell results with those average values obtained through traditional
LC/MS techniques, it was found that the LC/MS approach yielded lower
intracellular drug levels. This discrepancy was attributed to drug
losses during the sample preparation process in LC/MS, highlighting
the advantage of single-cell mass spectrometry in preserving and detecting
accurate drug concentrations within cells. This method offers a more
direct and precise approach to understanding drug uptake dynamics. Recent advancements have integrated the Single-probe
with a cell
manipulation system, enabling analysis of suspension cells and patient-isolated
cells from body fluids ( a–c). To extend quantitative SCMS techniques to suspended cells,
the Single-probe system was coupled with an integrated cell manipulation
platform (ICMP), which consists of an Eppendorf TransferMan cell micromanipulation
system, a Nikon Eclipse TE300 inverted microscope, and a Tokai Hit
ThermoPlate system. A single cell was selected by the cell selection
probe, and the cell diameter was measured using the inverted microscope.
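Because the amount extracted from a cell can be referenced to its measured diameter, an intracellular concentration can also be estimated. The short sketch below is a simplified illustration with hypothetical numbers, assuming an approximately spherical cell; it is not taken from the cited work.

    import math

    def cell_volume_liters(diameter_um):
        """Volume of an approximately spherical cell (1 um^3 = 1e-15 L)."""
        r = diameter_um / 2.0
        return (4.0 / 3.0) * math.pi * r ** 3 * 1e-15

    amount_mol = 1.0e-15          # hypothetical per-cell drug amount (1 fmol)
    diameter_um = 15.0            # hypothetical diameter read from the microscope
    concentration = amount_mol / cell_volume_liters(diameter_um)
    print(f"estimated intracellular concentration: {concentration:.1e} M")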
The microscope also enables the discrimination between cancerous
and noncancerous cells based on their morphological characteristics.
The cell was then transferred to the Single-probe tip, where the cell
was immediately lysed when contacting the solvent (e.g., acetonitrile
containing the internal standard). The single cell lysate and the
internal standard were simultaneously detected by MS. Bensen et al.
accurately measured intracellular amounts and concentrations of the
chemotherapy drug gemcitabine in individual bladder cancer cells,
including both K562 cell lines and bladder cancer cells isolated from
patients undergoing chemotherapy. Comparisons
with traditional LC/MS results of K562 cells yielded comparable intracellular
drug concentrations. This study demonstrates the system’s capacity
for real-time, precise quantification of anticancer drug levels in
single cells, highlighting its potential for improving personalized
chemotherapy regimens.
Combining the Single-Probe SCMS with Chemical Reactions
Reaction through Noncovalent Interactions
In the quest
to improve the detection coverage of ionizable cellular metabolites,
experiments are often conducted in both positive and negative ionization
modes. However, this is particularly challenging in SCMS, due to the
extremely limited cellular content available (∼1 pL/cell), which makes repeated analyses impractical. Addressing
this limitation, in 2016, Pan, Rao, and co-workers introduced a unique
MS method that facilitates the detection of negatively charged species
in single cells using positive ionization mode. This approach leverages dicationic ion-pairing reagents
in conjunction with the Single-probe for real-time reactive SCMS experiments.
In their studies, two dicationic compounds, 1,5-pentanediyl-bis(1-butylpyrrolidinium)
difluoride (C 5 (bpyr) 2 F 2 ) and 1,3-propanediyl-bis(tripropylphosphonium)
difluoride (C 3 (triprp) 2 F 2 ), were
added into the sampling solvent and introduced into single cells ( a and b). These dicationic reagents (2+) formed stable ion pairs
with negatively charged (1−) cellular metabolites, transforming
them into positively charged (1+) adducts, thus enabling their detection
in positive ionization mode with enhanced sensitivity. In three separate
SCMS experiments, 192 and 70 negatively charged metabolites were detected
as adducts with C 5 (bpyr) 2 F 2 and C 3 (triprp) 2 F 2 , respectively, along with
the detection of other positively charged metabolites, highlighting
the capability of this approach to detect a broad spectrum of metabolites.
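The mass arithmetic underlying this ion-pairing strategy is simple: a doubly charged reagent bound to a singly charged metabolite anion yields a singly charged adduct whose m/z is the sum of the two ion masses. The following generic sketch illustrates the calculation; the numerical values are placeholders rather than data from the original study.

    def adduct_mz(anion_mz, dication_mass):
        """m/z of the 1+ ion pair formed by a 2+ reagent and a 1- metabolite.

        anion_mz      : observed m/z of the singly charged metabolite anion
        dication_mass : mass of the doubly charged ion-pairing reagent (Da)
        Net charge is (+2) + (-1) = +1, so the m/z equals the summed mass.
        """
        return anion_mz + dication_mass

    # Placeholder values for illustration only
    print(adduct_mz(anion_mz=255.23, dication_mass=308.3))   # adduct expected near m/z 563.5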
A key advantage of these dicationic compounds lies in their selectivity
for complex formation, allowing the discrimination of low-abundance
ions with nearly identical m / z values.
Additionally, MS/MS was employed for molecular identification of selected
adduct ions. This reactive SCMS method represents a significant advancement
by enabling the simultaneous detection of negatively and positively
charged metabolites in a single experiment. Most notably, many of
the negatively charged metabolites identified using dicationic reagents
were undetectable in negative ionization mode alone, demonstrating
the enhanced sensitivity offered by this technique. Future studies
could explore other compounds to further refine the sensitivity and
scope of metabolite detection in single-cell analysis.
Reaction through Covalent Interactions
Lan et al. introduced a novel method for indirectly quantifying intracellular nitric oxide
(NO) by means of chemical reactions at the single-cell level ( a and b). NO, a reactive and short-lived
molecule (with a half-life of less than one second), plays a critical
role in various biological processes, including angiogenesis in tumors.
Quantifying NO at the single-cell level remains challenging due to
the small size of cells and NO’s reactive nature. There are
two main pathways for NO production: exogenous (provided by NO donor
compounds) and endogenous (produced by cells). Clinically, NO donors
are used in the treatment of conditions such as high blood pressure
and heart disease. Additionally, the anticancer drug doxorubicin (DOX)
can increase endogenous NO levels via the catalytic activity of nitric
oxide synthases (NOSs). Given NO’s crucial biological functions,
developing a method to accurately quantify NO at the single-cell level
is highly significant. Lan et al. proposed a method based on a quantitative
reaction between NO and amlodipine (AML), a compound containing the
Hantzsch ester group. The reaction between NO and AML yields dehydroamlodipine
(DAM), which can then be detected and quantified using the Single-probe
SCMS technique. Importantly, AML reacts selectively with NO, exhibiting
100% efficiency without interference from other reactive species within
the cell. In their study, individual cells were adhered to glass chips
containing microwells and were subsequently treated with AML under
different experimental conditions. To induce NO production, two compounds
were used: sodium nitroprusside (SNP) (to generate exogenous NO) and
doxorubicin (DOX) (to stimulate production of endogenous NO). The
Single-probe
SCMS system was employed for NO quantification, with acetonitrile
(ACN) containing 0.1% formic acid (FA) and 1.0 μM OXF (internal
standard) used as the sampling solution. Results from the SCMS studies
demonstrated that intracellular NO levels exhibited heterogeneous
distributions across the treated cells under all experimental conditions.
This method provides a robust approach to quantifying NO at the single-cell
level, offering insights into the complex biological roles of NO in
cellular systems.
Improving Cell Sample Preparation for Robust Single-Probe SCMS Analysis
Maintaining the metabolic integrity of live cells
during sample transport, storage, or extended measurements is critical,
particularly given the rapid turnover rate of metabolites and low
throughput of most ambient-based SCMS techniques, which require substantial
time to manually select and analyze a statistically significant number
of cells. A recent study developed a robust methodology to preserve
cellular metabolomic profiles for SCMS experiments, addressing the
challenge in most ambient SCMS metabolomics studies. This study introduced a cell preparation protocol combining washing
with a volatile salt (ammonium formate, AF) solution, rapid quenching
in liquid nitrogen (LN 2 ), vacuum freeze-drying, and storage
at −80 °C to stabilize cell metabolites for SCMS analysis.
Experimental findings demonstrated that LN 2 quenching effectively
preserved the overall metabolome, while storage at −80 °C
for 48 h caused minor changes in metabolite profiles of quenched cells.
In contrast, unquenched cells exhibited significant metabolic alterations
despite low-temperature storage. Further investigation revealed the
necessity of quenching to maintain metabolic integrity and emphasized
minimizing low-temperature storage duration to limit metabolic perturbations.
The proposed method is readily applicable to SCMS workflows, ensuring
metabolite stability during extended studies while maintaining the
fidelity of metabolic profiles.
Combining Advanced Data Analysis Methodologies with Single-Probe SCMS Experiments
The integration of SCMS methods with innovative
data analysis techniques has significantly advanced the field of single-cell
metabolomics. A variety of data processing and analysis methods have
been employed to extract meaningful insights from the complex data
generated from the Single-probe SCMS experiments, extending the applications
of these techniques.
SCMS Data Pretreatment
Liu et al. reported a generalized
data analysis workflow to pretreat the Single-probe SCMS data. This data preprocessing workflow includes multiple
key steps for data refinement: (1) removal of exogenous ion signals
originating from the culture medium and sampling solvent; (2) filtering
instrument
noise, which typically comprises 20%–40% of detected peaks,
through low-intensity ion exclusion; and (3) normalization of metabolite
ion intensities to the total ion count. These steps were shown to
effectively reduce data dimensionality while retaining crucial metabolite
information, though challenges remain in distinguishing true metabolite
signals from low-abundance noise. This
generalized data analysis workflow can be seamlessly integrated with
raw datasets, enabling thorough metabolomic analyses across different
experimental conditions.
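The three pretreatment steps described above can be illustrated with a short, generic Python sketch operating on a single-cell spectrum represented as peak arrays (for example, as exported from an .mzML file). This is an illustrative reimplementation rather than the published workflow code, and the background ion list and intensity threshold are hypothetical.

    import numpy as np

    # One single-cell spectrum as peak arrays (e.g., extracted from an .mzML file)
    mz    = np.array([104.107, 184.073, 496.340, 760.585, 782.567])
    inten = np.array([1.2e4,   8.5e5,   3.1e5,   6.4e4,   2.0e3])

    # (1) Remove exogenous ions observed in culture-medium / solvent blanks
    background_mz = {104.107}                       # hypothetical blank ion list
    mz_tol = 0.005                                  # matching tolerance (Da)
    keep = np.array([min(abs(m - b) for b in background_mz) > mz_tol for m in mz])

    # (2) Filter low-intensity instrument noise
    keep &= inten > 5.0e3                           # hypothetical intensity cutoff

    # (3) Normalize the remaining intensities to the total ion current (TIC)
    mz_clean, inten_clean = mz[keep], inten[keep]
    tic_normalized = inten_clean / inten_clean.sum()

    for m, i in zip(mz_clean, tic_normalized):
        print(f"m/z {m:.3f}  relative intensity {i:.3f}")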
The introduction of MassLite by Zhu et al. marks a notable advancement in the pretreatment of metabolomics
data. This software package is an integrated Python platform with
a user-friendly graphical interface for processing data in standard
.mzML format. This tool is well suited to handling data from intermittent
acquisition processes, enabling efficient segmentation of ion signals
from individual cells. MassLite also retains low-intensity metabolite
signals within complex single-cell data, broadening the scope of detectable
molecular species from limited analyte content. Additionally, this
tool incorporates functions for void scan filtering, dynamic grouping,
and advanced background removal, all of which enhance data quality
and processing efficiency. Furthermore, MassLite automates cell region
selection, replacing the manual process to enhance processing throughput.
Overall, MassLite serves as a vital tool for advancing SCMS research,
streamlining data preprocessing, and facilitating more accurate metabolomic
analyses.
SCMS Data Analysis by Machine Learning
While significant
progress has been made in understanding drug resistance mechanisms,
predicting a drug-resistant phenotype before starting chemotherapy
remains underexplored, potentially resulting in ineffective treatments
and unwanted toxicity for patients. For the first time, the integration
of the Single-probe SCMS with machine learning techniques was performed
by Liu et al. to quickly and accurately predict the phenotypes of
unknown single cells. This innovative approach, facilitated by the
Single-probe,
offers a solution for the rapid and reliable prediction of drug-resistant
cancer cell phenotypes such as those associated with chemoresistance
mechanisms (e.g., cell adhesion-mediated drug resistance (CAM-DR)). Advanced data analysis, incorporating machine
learning algorithms, was subsequently used to process complex metabolomic
data. Specifically, random forest (RF), penalized logistic regression
(LR), and artificial neural networks (ANNs) were used for analyzing
pretreated single-cell metabolomic datasets. By integrating a diverse
range of cellular metabolites, these models achieved significantly
improved predictive accuracy ( p -value < 0.05)
compared to other approaches that relied solely on metabolic biomarkers
identified through two-sample t -tests or PCA loading
plots. This highlights the effectiveness of our methodology in enhancing model performance.
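As a generic illustration of this type of analysis (not the code used in the cited study), the sketch below trains the three model families mentioned above on a synthetic single-cell metabolite matrix and compares their cross-validated accuracies using scikit-learn.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-in for a pretreated SCMS matrix: rows = single cells,
    # columns = TIC-normalized metabolite intensities, labels = phenotype.
    X, y = make_classification(n_samples=200, n_features=300, n_informative=25,
                               random_state=0)

    models = {
        "random forest": RandomForestClassifier(n_estimators=300, random_state=0),
        "penalized LR":  make_pipeline(StandardScaler(),
                                       LogisticRegression(penalty="l2", C=0.5,
                                                          max_iter=5000)),
        "neural net":    make_pipeline(StandardScaler(),
                                       MLPClassifier(hidden_layer_sizes=(64, 32),
                                                     max_iter=2000, random_state=0)),
    }

    for name, model in models.items():
        acc = cross_val_score(model, X, y, cv=5, scoring="accuracy")
        print(f"{name}: mean accuracy {acc.mean():.2f}")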
Yao et al. developed MetaPhenotype, a meta-learning-based model designed to address limitations
in adaptability and transferability often encountered in machine learning
models for SCMS data analysis. SCMS is a powerful tool for investigating
cellular heterogeneity, such as phenotypes, through the variation
of molecular species in individual cells. However, its application
to rare cell populations is often constrained by the limited availability
of cell samples. To overcome these challenges, two pairs of isogenic
melanoma cancer cell lines (each pair comprising a primary and a metastatic phenotype)
were analyzed using the Single-probe SCMS technique. Both control
and drug-treated cells were analyzed. The SCMS metabolomics data of
one cell pair (no drug treatment) served as the training and evaluation
datasets for MetaPhenotype, which was subsequently applied to classify
the remaining data. The MetaPhenotype model demonstrated rapid adaptation
and exceptional transferability, achieving high prediction accuracy
of over 90% with minimal new training samples. Moreover, it enabled
the identification of a small subset of critical molecular species
essential for phenotype classification. This work highlights the potential
of MetaPhenotype to lower the demand for extensive sample acquisition,
facilitating accurate cell phenotype classification even with limited
SCMS datasets. The applicability of MetaPhenotype extends beyond melanoma
cell lines and the specific SCMS platform employed in this study,
offering potential for broader use in metabolomics studies across
diverse SCMS platforms and cell systems.
SCMS Data Analysis by Biostatistics
A notable study
by Liu et al. illustrates the application of the Single-probe SCMS
experiment combined with SinCHet-MS (Single Cell Heterogeneity for
Mass Spectrometry) software to investigate tumor cell heterogeneity
and cellular subpopulations. They analyzed
the metabolomic profiles of drug-sensitive and drug-resistant melanoma
cells (WM115 and WM266-4) treated with vemurafenib. The data were
subjected to batch effect correction, subpopulation analysis, and
biomarker prioritization. Notably, the findings showed that drug-sensitive
cells developed a new subpopulation after treatment, while drug-resistant
cells only showed changes in existing subpopulation proportions. There
are a few highlights of this work. First, batch effect correction in SCMS studies was performed for the first time using SinCHet-MS. Second,
the subpopulations of cells can be quantified using this bioinformatics
tool. Third, new algorithms used in this software allow for prioritizing
biomarkers of subpopulations of cells. These contributions underscore
the transformative impact of combining Single-probe SCMS experiments
with sophisticated data analysis techniques, paving the way for improved
understanding of cellular behaviors and therapeutic responses.
Single-Probe MS Imaging (MSI)
As a microscale sampling
and ionization device, the Single-probe can be coupled to MS for other
studies. The Single-probe MS imaging (MSI) technique, first introduced
in 2015, is a novel tool for analyzing biomolecules on tissue slices
with high spatial resolution under ambient conditions. During the MSI experiment, the Single-probe tip is placed closely
above the tissue slice, and the solvent junction at the tip performs
in-situ surface microextraction, and the extracted molecules are immediately
analyzed by MS ( a–c). Using a programmed stage-control system, the Single-probe
tip performs continuous raster sampling of the region of interest
in tissue. MS images of ions of interest can be constructed using
a visualization tool. The Single-probe is capable of producing MSI
images of biological tissue slices with a spatial resolution as fine
as 8.5 μm ( d and e), making it one of the highest
resolutions among ambient MSI methods available.
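Constructing an ion image from raster-sampled spectra amounts to mapping, for each sampled position, the summed intensity within a narrow m/z window around the ion of interest onto a two-dimensional grid. The minimal sketch below is a generic illustration of this step; it is not the visualization tool used in the cited work, and the grid dimensions and target m/z are placeholders.

    import numpy as np
    import matplotlib.pyplot as plt

    def ion_image(spectra, n_rows, n_cols, target_mz, tol=0.01):
        """Build a 2D ion image from raster-acquired spectra.

        spectra: list of (mz_array, intensity_array) pairs in raster (row-major) order
        """
        img = np.zeros((n_rows, n_cols))
        for idx, (mz, inten) in enumerate(spectra):
            window = np.abs(mz - target_mz) <= tol
            img[idx // n_cols, idx % n_cols] = inten[window].sum()
        return img

    # Tiny synthetic 2 x 3 raster for illustration (placeholders, not real data)
    rng = np.random.default_rng(0)
    spectra = [(np.array([760.585, 782.567]), rng.random(2) * 1e5) for _ in range(6)]
    img = ion_image(spectra, n_rows=2, n_cols=3, target_mz=760.585)

    plt.imshow(img, cmap="viridis")
    plt.colorbar(label="intensity of m/z 760.585")
    plt.show()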
Combining the Single-Probe MSI with Chemical Reactions
Due to its unique design, the sampling solvent of the Single-probe can be flexibly selected. Similar to the relevant application in SCMS
studies, the use of dicationic compounds
(i.e., [C 5 (bpyr) 2 F 2 ] and [C 3 (triprp) 2 F 2 ]) in MSI experiments enabled the detection
of negatively charged species in the positive ion mode. Particularly, detection of metabolites in the
range of 600–900 m / z was
improved with enhanced ion intensities compared to regular negative
ionization modes. This technique also allowed the detection of metabolites
that were previously undetectable under standard conditions.
Combining Advanced Data Analysis with Single-Probe MSI Experiments
Due to their high dimensionality, high complexity, and large size,
extracting essential biological information from MSI data is generally
challenging. To facilitate the relevant studies, advanced data analysis
methods have been developed and combined with the Single-probe MSI
experiments. Tian et al. developed a data analysis method using Multivariate Curve Resolution
(MCR) and Machine Learning (ML) approaches, and then used it to analyze
the MSI data from a mouse kidney slice. This method involved four
main steps: data preprocessing, MCR-Alternating Least Squares (ALS),
supervised ML (e.g., Random Forest), and unsupervised ML (e.g., Clustering
Large Applications (CLARA) and Density-based Spatial Clustering of
Applications with Noise (DBSCAN)). A key step was using t-SNE, a dimensionality
reduction tool, to process and visualize the complex datasets. For
supervised ML methods, predefined histological regions identified
through MCR-ALS were used to train the models. In unsupervised methods,
t-SNE prepared the data for clustering. The combination of these approaches
provided a more thorough understanding of chemical and spatial features
in the data.
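As a generic illustration of the unsupervised branch of such a workflow (not the published analysis code; the data below are synthetic), pixel spectra can be embedded with t-SNE and then clustered so that pixels with similar metabolite profiles group into putative histological regions.

    import numpy as np
    from sklearn.manifold import TSNE
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import DBSCAN

    # Synthetic MSI data matrix: rows = pixels, columns = ion intensities
    rng = np.random.default_rng(1)
    pixels = np.vstack([rng.normal(0.0, 1.0, (150, 80)),    # "region A" pixels
                        rng.normal(3.0, 1.0, (150, 80))])   # "region B" pixels

    # Dimensionality reduction followed by density-based clustering
    embedding = TSNE(n_components=2, perplexity=30, random_state=1).fit_transform(
        StandardScaler().fit_transform(pixels))
    labels = DBSCAN(eps=3.0, min_samples=10).fit_predict(embedding)

    print("clusters found (label -1 = noise):", sorted(set(labels)))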
Other machine learning methods were then developed to improve the MSI data analysis. In a study involving slices of cancer
spheroids, the Single-probe was used to examine the effects of the
anticancer drug Irinotecan on colorectal cancer (HCT-116) spheroids. By obtaining spatially resolved metabolomic
profiles, the technique revealed how the drug affected the abundance
of metabolites in different regions of the 3D tumor model. ML techniques,
such as Random Forest and CLARA, were employed to analyze the MSI
data, improving the identification and classification of metabolomic
features. The MS images obtained from the Single-probe MSI experiments
can
be integrated with fluorescence microscopy images through image fusion
( a–c).
In Alzheimer’s disease (AD) research, the Single-probe was
used to investigate the spatial distribution of metabolites around
amyloid-beta (Aβ) plaques in an AD mouse brain. Image fusion allowed researchers to correlate histological
markers (detected through fluorescence microscopy) with metabolomic
features (observed through MSI). This combined approach improved spatial
resolution (∼5 μm) and provided insights into abnormal
metabolite expressions, such as lysophospholipids, malic acid, and
glutamine, that are linked to the progression of AD.
Single-Probe Mass Spectrometry in Live Multicellular Tumor Spheroids
The Single-probe can be used as a microscale sampling device to
extract analytes for direct MS analysis. In a study by Sun et al.,
the integration of the microfunnel, which was implanted into a spheroid,
with the Single-probe provided an innovative approach to analyze extracellular
metabolites in live multicellular tumor spheroids. This work focused on understanding
the effects of anticancer drug treatments in the tumor microenvironment. This technique is particularly valuable for
capturing undiluted extracellular compounds inside single spheroids, a region of particular interest due to its unique microenvironment and potential for
harboring drug-resistant cells. To carry out this work, the researchers
first developed the microfunnel from a biocompatible fused silica
capillary with a fine tip (∼25 μm), enabling precise
implantation into the spheroid to collect extracellular compounds.
The spheroids, cultured using a colon carcinoma cell line (HCT-116),
were treated with the anticancer drug irinotecan under various concentrations
and durations. The microfunnel allowed for targeted sampling, accumulating
metabolites in a microscale environment that would otherwise be challenging
to access without dilution or selection bias. Once metabolites were
collected, the Single-probe was inserted into the opening of the microfunnel
to extract these metabolites, which were then analyzed by MS. The changes in
the spheroid’s extracellular lipid profile were observed, particularly
in phospholipids and glycerides, with increased lipid abundance as
drug treatment concentration and exposure time increased. These results
indicated that irinotecan prompted significant shifts in lipid metabolites,
which could contribute to drug-resistance mechanisms within central
tumor cells. This study’s workflow demonstrates an effective
methodology for profiling the extracellular environment of live spheroids,
making it a valuable tool for investigating drug response, cellular
communication, and resistance mechanisms in three-dimensional (3D)
cancer models.
The fabrication of the Single-probe
( a–c)
has been thoroughly described in our previous studies. , , , , This assembly includes three key elements:
a laser-pulled dual-bore quartz needle, a solvent providing silica
capillary, and a nanoelectrospray ionization (nano-ESI) emitter that
efficiently ionizes the extracted metabolite. The fabrication of the
single probe begins with the precise shaping of a dual-bore quartz
needle (outer diameter (OD) 500 μm; inner diameter (ID) 127
μm, Friedrich & Dimmock, Inc., Millville, NJ, USA)) using
a laser pipet puller (Model P-2000, Sutter Instrument CO., Novato,
CA). This pulling process creates a fine, tapered structure in the
quartz needle. Following this step, a fused silica capillary (outer
diameter 105 μm, inner diameter 40 μm, Polymicro Technologies,
Phoenix, AZ) is embedded into one bore of the pulled quartz needle
to serve as the solvent delivery channel. Additionally, a nano-ESI
emitter is positioned within the other bore. The nano-ESI emitter
is formed by heating a similar fused silica capillary with a butane
micro torch to achieve a sharp, functional tip for effective ionization.
To secure both the capillary and the nano-ESI emitter within the dual-bore
needle, UV-curing epoxy (Prime Dental, Item No. 006.030, Chicago,
IL) is applied to glue these parts and is then cured under a UV LED
lamp. To ensure the ease of use and stability of the
device during sampling,
the Single-probe is mounted on a microscope glass slide using standard
epoxy adhesive (Part No. 20945, ITW Devcon, Inc., Danvers, MA) ( b). A Conductive
MicroTight Union (M-539, IDEX Health & Science, LLC) connects
the fused silica capillary (ID: 50 μm, OD: 150 μm) to
the solvent-providing capillary. A PEEK tubing (F-181 and F-380, IDEX
Health & Science, LLC) is used as the sleeve of the fused silica
capillary to ensure a tight connection. The ionization voltage is
applied to the union instead of the nano-ESI emitter, enabling efficient
solvent delivery and ionization. To construct a functioning setup,
the Single-probe is combined with other components, including a motorized
XYZ-stage (CONEX- MFACC, Newport Corp., Irvine, CA), a manual XYZ-translation
stage (Compact Dovetail XYZ Linear Stage, Newport Corp., Irvine, CA),
a stereomicroscope (Supereyes T004 Digital Microscope, Shenzhen D&F
Co., Ltd., Shenzhen, China), and a flexible connector (MXB-3 h, Siskiyou
Corp., Grants Pass, OR). All components are integrated on an optical
board (Thorlabs Inc., Newton, NJ, US) interfaced with the mass spectrometer
(Thermo LTQ Orbitrap XL mass spectrometer, Thermo Fisher Scientific,
Inc., Waltham, MA) ( d).
Single-Probe SCMS in Semiquantitative Studies The Single-probe
SCMS technique has been used to characterize cellular metabolites
through semiquantitative analysis, in which ion intensities of metabolites
are normalized to the total ion current (TIC) as commonly performed
in MS studies. The Single-probe semiquantitative approaches have been
used for uncovering molecular diversity and cellular heterogeneity. , , , , − , , , , This section outlines the progression of
semiquantitative applications of the Single-probe SCMS technique in
single-cell metabolomics, highlighting its evolution across diverse
biological contexts. One of the earliest qualitative applications
was demonstrated in 2018 by Sun et al., who employed the Single-probe
SCMS to investigate intracellular metabolite changes in Scrippsiella trochoidea , a marine dinoflagellate,
under various environmental conditions. Bulk filtration techniques are predominantly used to assess the
physiological responses of microbial populations to environmental
changes. The Single-probe SCMS technique provided profiles of intracellular
metabolites in these single marine algae cells altered by different
conditions such as light variation and nitrogen limitation. This work
is a showcase of the potential applications of single-cell metabolomics
studies of marine algae cells’ responses to environmental stressors
without extensive sample manipulation. To further extend the
scope of SCMS, a novel platform integrating
a commercially available cell manipulation system with the Single-probe
technique was developed, allowing for the analysis of suspended cells
such as leukemia cells. , This Integrated Cell
Manipulation Platform (ICMP) coupled with a high-resolution mass spectrometer
was further used for quantitative analysis of intracellular metabolites
from patient-derived suspension cells such as those in urine from
bladder cancer patients (as illustrated in and detailed in section ). This system not only expanded the range of cell types that
could be analyzed with minimal sample preparation but also enhanced
specificity and sensitivity in distinguishing cellular features. The
versatility of this approach highlighted its potential for personalized
medicine, offering a rapid, real-time method to analyze live patient
cells and tailor therapeutic strategies. The semiquantitative
applications of the Single-probe SCMS methods
have been extended to studying drug-resistant cancer cells. The colorectal
cancer cells with irinotecan resistance possess elevated unsaturated lipids and cancer stem cell markers,
pointing to the upregulation of SCD1 as a key factor in resistance.
These findings suggested that inhibiting SCD1 could enhance irinotecan
sensitivity, offering a potential approach to overcoming drug resistance
in clinical treatment. More recently, Chen et al. applied SCMS to
evaluate the synergistic effects of combining irinotecan with metformin,
an antidiabetic medicine, in irinotecan-resistant colorectal cancer
cells. The study revealed that metformin
could downregulate lipids and fatty acids, suppressing cancer cell
metabolism. Combining metformin with irinotecan further enhanced the
suppression of glycosylated ceramide production, a critical component
of cancer cell metabolism. These studies demonstrated the utility
of SCMS in investigating drug resistance mechanisms and underscored
its potential for broader applications in cancer therapy. The
Single-probe SCMS has coupled with fluorescence microscopy
to investigate cell–cell interactions. Chen et al. employed
the technique in a co-culture system, which included drug-resistant
and drug-sensitive cancer cells, to study metabolism affected by cell–cell
interactions ( a–c). Two types of co-culture systems
were studied, including indirect (two different types of cells were
cultured in the same well but separated by Transwell) and direct (two
different types of cells were directly cultured in the same well without
separation) co-culture systems. In the direct co-culture experiments,
one type of cells was labeled with GFP (green fluorescence protein),
and fluorescence microscopy was combined with the Single-probe SCMS
to analyze metabolites of single cells in each group. The study revealed
that drug-sensitive cells exhibited increased resistance and altered
metabolic profiles when co-cultured with drug-resistant cells, shedding
light on the role of cellular communication in the development of
chemotherapy resistance. This application demonstrated the integration
of SCMS, and microscopy techniques could provide unique insights into
the metabolic shifts driven by cell–cell interactions, paving
the way for future studies on the metabolic responses of heterogeneous
cell populations. The Single-probe SCMS has also been coupled with
bright-field microscopy
to study cell heterogeneity. Nguyen et al. extended the application
of SCMS to infectious diseases by investigating host cell heterogeneity
during Trypanosoma cruzi ( T. cruzi ) infection, the causative agent of Chagas
disease (CD). The study revealed significant
metabolic differences between infected cells, which contain stained
parasites, and uninfected cells as well as the presence of bystander
effect, which indicates uninfected cells adjacent to infected ones
displaying altered metabolism. The bystander effect suggested a novel
mechanism for lesion development in parasite-free areas, offering
crucial insights into the pathogenesis of CD. This work represents
the first use of SCMS in studying mammalian-infectious diseases, showcasing
the technique’s broad applicability beyond cancer research. The Single-probe SCMS technique has significantly advanced semiquantitative
single-cell metabolomics by enabling precise, real-time analysis of
individual cells across diverse biological systems. Its applications
cover multiple areas such as marine microorganisms, human diseases,
and cell–cell communication, offering unprecedented insight
into cellular heterogeneity and metabolic dynamics. As the technique
continues to evolve, it holds immense potential for furthering our
understanding of complex biological processes and driving innovations
in personalized medicine. Single-Probe SCMS in Quantitative Studies The Single-probe
SCMS technique has been used for quantification of anticancer drugs
(both amounts and concentrations) in live individual cells under ambient
conditions. Due to its unique design,
the internal standard can be added into the sampling solvent (e.g.,
acetonitrile) at a known concentration. The internal standard can
be an isotopically labeled compound or species with the structure
highly similar to the target compound. , , When performing quantitative SCMS measurements of
drug-treated cells, the Single-probe tip is inserted into a single
living cell to extract intracellular chemicals (including drug molecules).
Both the internal standard and drug molecules are simultaneously delivered
to the nano-ESI emitter for ionization and detected by MS. Multiple
factors (e.g., the ion intensities of the drug and internal standard,
internal standard’s concentration and flow rate, and data acquisition
time) must be considered for the quantification. If the isotopically
labeled analogue is not available, the internal standard can be selected
from the species with a structure similar to the target compound,
whereas a calibration curve must be established. The quantitative
SCMS technique makes it possible to accurately estimate the amounts
of drugs in individual cells, offering insights into how individual
cells metabolize and retain therapeutic agents. This method
was first employed to rapidly quantify the absolute amounts of the
anticancer drug in individual adherent cancer cells under ambient
conditions. Pan et al. performed the
measurement of anticancer drug amounts within live cells . In this study,
both HCT-116 and HeLa cell lines were employed to investigate the
intracellular uptake of irinotecan under various treatment durations
and concentrations. To minimize the diffusion loss of cellular contents
and internal standard (irinotecan-d10), glass chips containing microwells
(diameter, 55 μm; depth, 25 μm) were used during cell
incubation and treatment. Single cells in individual microwells were
selected for measurements. The amount of irinotecan within single
cells was heterogenous across different cells. When comparing these
single cell results with those average values obtained through traditional
LC/MS techniques, it was found that the LC/MS approach yielded lower
intracellular drug levels. This discrepancy was attributed to drug
losses during the sample preparation process in LC/MS, highlighting
the advantage of single-cell mass spectrometry in preserving and detecting
accurate drug concentrations within cells. This method offers a more
direct and precise approach to understanding drug uptake dynamics. Recent advancements have integrated the Single-probe
with a cell
manipulation system, enabling analysis of suspension cells and patient-isolated
cells from body fluids ( a–c). To extend quantitative SCMS techniques to suspended cells,
the Single-probe system was coupled with an integrated cell manipulation
platform (ICMP), which consists of an Eppendorf TransferMan cell micromanipulation
system, a Nikon Eclipse TE300 inverted microscope, and a Tokai Hit
ThermoPlate system. A single cell was selected by the cell selection
probe, and the cell diameter was measured using the inverted microscope.
In fact, the microscope enables the discrimination between cancerous
and noncancerous cells based on their morphological characteristics.
The cell was then transferred to the Single-probe tip, where the cell
was immediately lysed when contacting the solvent (e.g., acetonitrile
containing the internal standard). The single cell lysate and the
internal standard were simultaneously detected by MS. Bensen et al.
accurately measured intracellular amounts and concentrations of the
chemotherapy drug gemcitabine in individual bladder cancer cells,
including both K562 cell lines and bladder cancer cells isolated from
patients undergoing chemotherapy. Comparisons
with traditional LC/MS results of K562 cells yielded comparable intracellular
drug concentrations. This study demonstrates the system’s capacity
for real-time, precise quantification of anticancer drug levels in
single cells, highlighting its potential for improving personalized
chemotherapy regimens. Combing the Single-Probe SCMS with Chemical Reactions Reaction through Noncovalent Interactions In the quest
to improve the detection coverage of ionizable cellular metabolites,
experiments are often conducted in both positive and negative ionization
modes. However, this is particularly challenging in SCMS, due to the
extremely limited cellular content available (∼1 pL/cell), which makes repeated analyses impractical. Addressing
this limitation, in 2016, Pan, Rao, and co-workers introduced a unique
MS method that facilitates the detection of negatively charged species
in single cells using positive ionization mode. This approach leverages dicationic ion-pairing reagents
in conjunction with the Single-probe for real-time reactive SCMS experiments.
In their studies, two dicationic compounds, 1,5-pentanediyl-bis(1-butylpyrrolidinium)
difluoride (C 5 (bpyr) 2 F 2 ) and 1,3-propanediyl-bis(tripropylphosphonium)
difluoride (C 3 (triprp) 2 F 2 ), were
added into the sampling solvent and introduced into single cells ( a and b). These dicationic reagents (2+) formed stable ion pairs
with negatively charged (1−) cellular metabolites, transforming
them into positively charged (1+) adducts, thus enabling their detection
in positive ionization mode with enhanced sensitivity. In three separate
SCMS experiments, 192 and 70 negatively charged metabolites were detected
as adducts with C 5 (bpyr) 2 F 2 and C 3 (triprp) 2 F 2 , respectively, along with
the detection of other positively charged metabolites, highlighting
the capability of this approach to detect a broad spectrum of metabolites.
A key advantage of these dicationic compounds lies in their selectivity
for complex formation, allowing the discrimination of low-abundance
ions with nearly identical m / z values.
Additionally, MS/MS was employed for molecular identification of selected
adduct ions. This reactive SCMS method represents a significant advancement
by enabling the simultaneous detection of negatively and positively
charged metabolites in a single experiment. Most notably, many of
the negatively charged metabolites identified using dicationic reagents
were undetectable in negative ionization mode alone, demonstrating
the enhanced sensitivity offered by this technique. Future studies
could explore other compounds to further refine the sensitivity and
scope of metabolite detection in single-cell analysis. Reaction through Covalent Interactions Lan et al. introduced
a novel method for indirect quantifying intracellular nitric oxide
(NO) by means of chemical reactions at the single-cell level ( a and b). NO, a reactive and short-lived
molecule (with a half-life of less than one second), plays a critical
role in various biological processes, including angiogenesis in tumors.
Quantifying NO at the single-cell level remains challenging due to
the small size of cells and NO’s reactive nature. There are
two main pathways for NO production: exogenous (provided by NO donor
compounds) and endogenous (produced by cells). Clinically, NO donors
are used in the treatment of conditions such as high blood pressure
and heart disease. Additionally, the anticancer drug doxorubicin (DOX)
can increase endogenous NO levels via the catalytic activity of nitric
oxide synthase (NOSs). Given NO’s crucial biological functions,
developing a method to accurately quantify NO at the single-cell level
is highly significant. Lan et al. proposed a method based on a quantitative
reaction between NO and amlodipine (AML), a compound containing the
Hantzsch ester group. The reaction between NO and AML yields dehydroamlodipine
(DAM), which can then be detected and quantified using the Single-probe
SCMS technique. Importantly, AML reacts selectively with NO, exhibiting
100% efficiency without interference from other reactive species within
the cell. In their study, individual cells were adhered to glass chips
containing microwells and were subsequently treated with AML under
different experimental conditions. To induce NO production, two compounds
were used: sodium nitroprusside (SNP) (to generate exogenous NO) and
doxorubicin (DOX) (to stimulate production of endogenous NO). The
Single-probe
SCMS system was employed for NO quantification, with acetonitrile
(ACN) containing 0.1% formic acid (FA) and 1.0 μM OXF (internal
standard) used as the sampling solution. Results from the SCMS studies
demonstrated that intracellular NO levels exhibited heterogeneous
distributions across the treated cells under all experimental conditions.
This method provides a robust approach to quantifying NO at the single-cell
level, offering insights into the complex biological roles of NO in
cellular systems. Improving Cell Sample Preparation for Robust Single-Probe SCMS
Analysis Maintaining the metabolic integrity of live cells
during sample transport, storage, or extended measurements is critical,
particularly given the rapid turnover rate of metabolites and low
throughput of most ambient-based SCMS techniques, which require substantial
time to manually select and analyze a statistically significant number
of cells. A recent study developed a robust methodology to preserve
cellular metabolomic profiles for SCMS experiments, addressing the
challenge in most ambient SCMS metabolomics studies . This study introduced a cell preparation protocol combining washing
by volatile salt (ammonium formate (AF)) solution, rapid quenching
in liquid nitrogen (LN 2 ), vacuum freeze-drying, and storage
at −80 °C to stabilize cell metabolites for SCMS analysis.
Experimental findings demonstrated that LN 2 quenching effectively
preserved the overall metabolome, while storage at −80 °C
for 48 h caused minor changes in metabolite profiles of quenched cells.
In contrast, unquenched cells exhibited significant metabolic alterations
despite low-temperature storage. Further investigation revealed the
necessity of quenching to maintain metabolic integrity and emphasized
minimizing low-temperature storage duration to limit metabolic perturbations.
The proposed method is readily applicable to SCMS workflows, ensuring
metabolite stability during extended studies while maintaining the
fidelity of metabolic profiles. Combining Advanced Data Analysis Methodologies with Single-Probe
SCMS Experiments The integration of SCMS methods with innovative
data analysis techniques has significantly advanced the field of single-cell
metabolomics. A variety of data processing and analysis methods have
been employed to extract meaningful insights from the complex data
generated from the Single-probe SCMS experiments, extending the applications
of these techniques. , , , SCMS Data Pretreatment Liu et al. reported a generalized
data analysis workflow to pretreat the Single-probe SCMS data. This data preprocessing workflow includes multiple
key steps for data refinement: (1) removal of exogenous ion signals
originated from culture medium and sampling solvent; (2) filtering
instrument
noise, which typically comprises 20%–40% of detected peaks,
through low-intensity ion exclusion; and (3) normalization of metabolite
ion intensities to the total ion count. These steps were shown to
effectively reduce data dimensionality while retaining crucial metabolite
information, though challenges remain in distinguishing true metabolite
signals from low-abundance noise. This
generalized data analysis workflow can be seamlessly integrated with
raw datasets, enabling thorough metabolomic analyses across different
experimental conditions. The introduction of MassLite by Zhu
et al. marks a notable advancement in the pretreatment of metabolomics
data. This software package is an integrated Python platform with
a user-friendly graphical interface for processing data in standard
.mzML format. This tool is suitable to handle data from intermittent
acquisition processes, enabling efficient segmentation of ion signals
from individual cells. MassLite also retains low-intensity metabolite
signals within complex single-cell data, broadening the scope of detectable
molecular species from limited analyte content. Additionally, this
tool incorporates functions for void scan filtering, dynamic grouping,
and advanced background removal, all of which enhance data quality
and processing efficiency. Furthermore, MassLite automates cell region
selection, replacing the manual process to enhance processing throughput.
Overall, MassLite serves as a vital tool for advancing SCMS research,
streamlining data preprocessing, and facilitating more accurate metabolomic
analyses. SCMS Data Analysis by Machine Learning While significant
progress has been made in understanding drug resistance mechanisms,
predicting a drug-resistant phenotype before starting chemotherapy
remains underexplored, potentially resulting in ineffective treatments
and unwanted toxicity for patients. For the first time, the integration
of the Single-probe SCMS with machine learning techniques was performed
by Liu et al. to quickly and accurately predict the phenotypes of
unknown single cells. This innovative approach, facilitated by the
Single-probe,
offers a solution for the rapid and reliable prediction of drug-resistant
cancer cell phenotypes such as those associated with chemoresistance
mechanisms (e.g., cell adhesion-mediated drug resistance (CAM-DR)). Advanced data analysis, incorporating machine
learning algorithms, was subsequently used to process complex metabolomic
data. Specifically, random forest (RF), penalized logistic regression
(LR), and artificial neural networks (ANNs) were used for analyzing
pretreated single-cell metabolomic datasets. By integrating a diverse
range of cellular metabolites, these models achieved significantly
improved predictive accuracy ( p -value < 0.05)
compared to other approaches that relied solely on metabolic biomarkers
identified through two-sample t -tests or PCA loading
plots. This highlights the effectiveness of this methodology in enhancing model performance.
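As a rough illustration of this type of analysis, the sketch below trains a random forest classifier, one of the model families mentioned above, on a simulated pretreated intensity matrix and estimates phenotype-prediction accuracy by cross-validation. The data, feature counts, and model settings are invented for illustration and do not reproduce the published configuration.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Simulated TIC-normalized intensities: 60 drug-sensitive and 60 drug-resistant cells,
# 200 metabolite features, with a small subset of features shifted between phenotypes.
rng = np.random.default_rng(1)
X = rng.normal(size=(120, 200))
y = np.array([0] * 60 + [1] * 60)       # 0 = sensitive, 1 = resistant (labels are illustrative)
X[y == 1, :10] += 1.5                   # a few discriminative metabolites

# Random forest was one of the reported model types (alongside penalized LR and ANNs);
# 5-fold cross-validation gives an estimate of phenotype-prediction accuracy.
clf = RandomForestClassifier(n_estimators=300, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.2f} ± {scores.std():.2f}")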
Yao et al. developed MetaPhenotype, a meta-learning-based model designed to address limitations
in adaptability and transferability often encountered in machine learning
models for SCMS data analysis. SCMS is a powerful tool for investigating
cellular heterogeneity, such as phenotypes, through the variation
of molecular species in individual cells. However, its application
to rare cell populations is often constrained by the limited availability
of cell samples. To overcome these challenges, two pairs of isogenic melanoma cancer cell lines (each pair comprising a primary and a metastatic phenotype)
were analyzed using the Single-probe SCMS technique. Both control
and drug-treated cells were analyzed. The SCMS metabolomics data of
one cell pair (no drug treatment) served as the training and evaluation
datasets for MetaPhenotype, which was subsequently applied to classify
the remaining data. The MetaPhenotype model demonstrated rapid adaptation
and exceptional transferability, achieving high prediction accuracy
of over 90% with minimal new training samples. Moreover, it enabled
the identification of a small subset of critical molecular species
essential for phenotype classification. This work highlights the potential
of MetaPhenotype to lower the demand for extensive sample acquisition,
facilitating accurate cell phenotype classification even with limited
SCMS datasets. The applicability of MetaPhenotype extends beyond melanoma
cell lines and the specific SCMS platform employed in this study,
offering potential for broader use in metabolomics studies across
diverse SCMS platforms and cell systems.
SCMS Data Analysis by Biostatistics
A notable study
by Liu et al. illustrates the application of the Single-probe SCMS
experiment combined with SinCHet-MS (Single Cell Heterogeneity for
Mass Spectrometry) software to investigate tumor cell heterogeneity
and cellular subpopulations. They analyzed
the metabolomic profiles of drug-sensitive and drug-resistant melanoma
cells (WM115 and WM266-4) treated with vemurafenib. The data were
subjected to batch effect correction, subpopulation analysis, and
biomarker prioritization. Notably, the findings showed that drug-sensitive
cells developed a new subpopulation after treatment, while drug-resistant
cells only showed changes in existing subpopulation proportions. There
are a few highlights of this work. First, batch effect correction in SCMS studies was performed for the first time using SinCHet-MS. Second,
the subpopulations of cells can be quantified using this bioinformatics
tool. Third, new algorithms used in this software allow for prioritizing
biomarkers of subpopulations of cells. These contributions underscore
the transformative impact of combining Single-probe SCMS experiments
with sophisticated data analysis techniques, paving the way for improved
understanding of cellular behaviors and therapeutic responses.
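SinCHet-MS is a dedicated software package; the minimal sketch below only illustrates the general logic of subpopulation analysis on SCMS profiles, clustering cells in a shared principal-component space and comparing cluster proportions between conditions. All data, cluster counts, and settings are assumptions made for illustration.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)

# Simulated metabolomic profiles: untreated cells form two subpopulations,
# while treated cells contain an additional, shifted subpopulation.
untreated = np.vstack([rng.normal(0, 1, (80, 50)), rng.normal(3, 1, (40, 50))])
treated = np.vstack([rng.normal(0, 1, (60, 50)), rng.normal(3, 1, (30, 50)),
                     rng.normal(-4, 1, (30, 50))])

# Cluster all cells together in a shared PCA space, then compare
# subpopulation proportions between the two conditions.
X = np.vstack([untreated, treated])
labels = np.array(["untreated"] * len(untreated) + ["treated"] * len(treated))
scores = PCA(n_components=5).fit_transform(X)
clusters = GaussianMixture(n_components=3, random_state=0).fit_predict(scores)

for condition in ("untreated", "treated"):
    counts = np.bincount(clusters[labels == condition], minlength=3)
    print(condition, np.round(counts / counts.sum(), 2))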
The Single-probe
SCMS technique has been used to characterize cellular metabolites
through semiquantitative analysis, in which ion intensities of metabolites
are normalized to the total ion current (TIC) as commonly performed
in MS studies. The Single-probe semiquantitative approaches have been
used for uncovering molecular diversity and cellular heterogeneity. This section outlines the progression of
semiquantitative applications of the Single-probe SCMS technique in
single-cell metabolomics, highlighting its evolution across diverse
biological contexts. One of the earliest qualitative applications
was demonstrated in 2018 by Sun et al., who employed the Single-probe
SCMS to investigate intracellular metabolite changes in Scrippsiella trochoidea , a marine dinoflagellate,
under various environmental conditions. Conventionally, bulk filtration techniques are used to assess the physiological responses of microbial populations to environmental changes. In contrast, the Single-probe SCMS technique provided profiles of intracellular
metabolites in these single marine algae cells altered by different
conditions such as light variation and nitrogen limitation. This work
showcases the potential of single-cell metabolomics for studying marine algae cells’ responses to environmental stressors
without extensive sample manipulation. To further extend the
scope of SCMS, a novel platform integrating
a commercially available cell manipulation system with the Single-probe
technique was developed, allowing for the analysis of suspended cells
such as leukemia cells. This Integrated Cell
Manipulation Platform (ICMP) coupled with a high-resolution mass spectrometer
was further used for quantitative analysis of intracellular metabolites
from patient-derived suspension cells such as those in urine from
bladder cancer patients (as illustrated in and detailed in section ). This system not only expanded the range of cell types that
could be analyzed with minimal sample preparation but also enhanced
specificity and sensitivity in distinguishing cellular features. The
versatility of this approach highlighted its potential for personalized
medicine, offering a rapid, real-time method to analyze live patient
cells and tailor therapeutic strategies. The semiquantitative
applications of the Single-probe SCMS methods
have been extended to studying drug-resistant cancer cells. The colorectal
cancer cells with irinotecan resistance possess elevated unsaturated lipids and cancer stem cell markers,
pointing to the upregulation of SCD1 as a key factor in resistance.
These findings suggested that inhibiting SCD1 could enhance irinotecan
sensitivity, offering a potential approach to overcoming drug resistance
in clinical treatment. More recently, Chen et al. applied SCMS to
evaluate the synergistic effects of combining irinotecan with metformin,
an antidiabetic medicine, in irinotecan-resistant colorectal cancer
cells. The study revealed that metformin
could downregulate lipids and fatty acids, suppressing cancer cell
metabolism. Combining metformin with irinotecan further enhanced the
suppression of glycosylated ceramide production, a critical component
of cancer cell metabolism. These studies demonstrated the utility
of SCMS in investigating drug resistance mechanisms and underscored
its potential for broader applications in cancer therapy. The
Single-probe SCMS has been coupled with fluorescence microscopy
to investigate cell–cell interactions. Chen et al. employed
the technique in a co-culture system, which included drug-resistant
and drug-sensitive cancer cells, to study metabolism affected by cell–cell
interactions ( a–c). Two types of co-culture systems
were studied, including indirect (two different types of cells were
cultured in the same well but separated by Transwell) and direct (two
different types of cells were directly cultured in the same well without
separation) co-culture systems. In the direct co-culture experiments,
one type of cells was labeled with GFP (green fluorescence protein),
and fluorescence microscopy was combined with the Single-probe SCMS
to analyze metabolites of single cells in each group. The study revealed
that drug-sensitive cells exhibited increased resistance and altered
metabolic profiles when co-cultured with drug-resistant cells, shedding
light on the role of cellular communication in the development of
chemotherapy resistance. This application demonstrated that the integration of SCMS and microscopy techniques could provide unique insights into
the metabolic shifts driven by cell–cell interactions, paving
the way for future studies on the metabolic responses of heterogeneous
cell populations. The Single-probe SCMS has also been coupled with
bright-field microscopy
to study cell heterogeneity. Nguyen et al. extended the application
of SCMS to infectious diseases by investigating host cell heterogeneity
during infection with Trypanosoma cruzi ( T. cruzi ), the causative agent of Chagas
disease (CD). The study revealed significant
metabolic differences between infected cells, which contain stained
parasites, and uninfected cells, as well as the presence of a bystander effect, in which uninfected cells adjacent to infected ones display altered metabolism. The bystander effect suggested a novel
mechanism for lesion development in parasite-free areas, offering
crucial insights into the pathogenesis of CD. This work represents
the first use of SCMS in studying mammalian-infectious diseases, showcasing
the technique’s broad applicability beyond cancer research. The Single-probe SCMS technique has significantly advanced semiquantitative
single-cell metabolomics by enabling precise, real-time analysis of
individual cells across diverse biological systems. Its applications
cover multiple areas such as marine microorganisms, human diseases,
and cell–cell communication, offering unprecedented insight
into cellular heterogeneity and metabolic dynamics. As the technique
continues to evolve, it holds immense potential for furthering our
understanding of complex biological processes and driving innovations
in personalized medicine.
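Many of the semiquantitative comparisons summarized in this section reduce to contrasting TIC-normalized ion intensities between two groups of cells. The generic sketch below shows such a comparison using two-sample t-tests, a simple Bonferroni correction, and fold changes; the simulated data and cutoffs are illustrative and are not taken from any of the cited studies.

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# TIC-normalized intensities for 150 metabolites in two groups of single cells.
control = rng.lognormal(mean=0.0, sigma=0.5, size=(40, 150))
treated = rng.lognormal(mean=0.0, sigma=0.5, size=(40, 150))
treated[:, :15] *= 3.0                      # a subset of metabolites elevated roughly 3-fold

t_stat, p = stats.ttest_ind(np.log2(treated), np.log2(control), axis=0)
p_adj = np.minimum(p * p.size, 1.0)         # Bonferroni correction (one simple choice)
log2_fc = np.log2(treated.mean(axis=0) / control.mean(axis=0))

hits = np.where((p_adj < 0.05) & (np.abs(log2_fc) > 1))[0]
print(f"{hits.size} metabolites pass the (illustrative) significance and fold-change cutoffs")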
The Single-probe
SCMS technique has been used for quantification of anticancer drugs
(both amounts and concentrations) in live individual cells under ambient
conditions. Due to its unique design,
the internal standard can be added into the sampling solvent (e.g.,
acetonitrile) at a known concentration. The internal standard can
be an isotopically labeled compound or species with the structure
highly similar to the target compound. When performing quantitative SCMS measurements of
drug-treated cells, the Single-probe tip is inserted into a single
living cell to extract intracellular chemicals (including drug molecules).
Both the internal standard and drug molecules are simultaneously delivered
to the nano-ESI emitter for ionization and detected by MS. Multiple
factors (e.g., the ion intensities of the drug and internal standard,
internal standard’s concentration and flow rate, and data acquisition
time) must be considered for the quantification. If the isotopically
labeled analogue is not available, the internal standard can be selected
from the species with a structure similar to the target compound,
whereas a calibration curve must be established. The quantitative
SCMS technique makes it possible to accurately estimate the amounts
of drugs in individual cells, offering insights into how individual
cells metabolize and retain therapeutic agents. This method
was first employed to rapidly quantify the absolute amounts of the
anticancer drug in individual adherent cancer cells under ambient
conditions. Pan et al. performed the
measurement of anticancer drug amounts within live cells . In this study,
both HCT-116 and HeLa cell lines were employed to investigate the
intracellular uptake of irinotecan under various treatment durations
and concentrations. To minimize the diffusion loss of cellular contents
and internal standard (irinotecan-d10), glass chips containing microwells
(diameter, 55 μm; depth, 25 μm) were used during cell
incubation and treatment. Single cells in individual microwells were
selected for measurements. The amount of irinotecan within single
cells was heterogeneous across different cells. When comparing these single-cell results with the average values obtained through traditional
LC/MS techniques, it was found that the LC/MS approach yielded lower
intracellular drug levels. This discrepancy was attributed to drug
losses during the sample preparation process in LC/MS, highlighting
the advantage of single-cell mass spectrometry in preserving and detecting
accurate drug concentrations within cells. This method offers a more
direct and precise approach to understanding drug uptake dynamics. Recent advancements have integrated the Single-probe
with a cell
manipulation system, enabling analysis of suspension cells and patient-isolated
cells from body fluids ( a–c). To extend quantitative SCMS techniques to suspended cells,
the Single-probe system was coupled with an integrated cell manipulation
platform (ICMP), which consists of an Eppendorf TransferMan cell micromanipulation
system, a Nikon Eclipse TE300 inverted microscope, and a Tokai Hit
ThermoPlate system. A single cell was selected by the cell selection
probe, and the cell diameter was measured using the inverted microscope.
In fact, the microscope enables the discrimination between cancerous
and noncancerous cells based on their morphological characteristics.
The cell was then transferred to the Single-probe tip, where the cell
was immediately lysed when contacting the solvent (e.g., acetonitrile
containing the internal standard). The single cell lysate and the
internal standard were simultaneously detected by MS. Bensen et al.
accurately measured intracellular amounts and concentrations of the
chemotherapy drug gemcitabine in individual cells, including both cultured K562 cells and bladder cancer cells isolated from
patients undergoing chemotherapy. Comparisons
with traditional LC/MS results of K562 cells yielded comparable intracellular
drug concentrations. This study demonstrates the system’s capacity
for real-time, precise quantification of anticancer drug levels in
single cells, highlighting its potential for improving personalized
chemotherapy regimens.
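The arithmetic behind this internal-standard strategy can be illustrated as follows: with the internal standard delivered at a known concentration and flow rate, the moles of standard introduced during the acquisition window are known, and the drug amount follows from the drug-to-standard intensity ratio (here assuming equal response factors or a 1:1 calibration); dividing by an estimated cell volume yields a concentration. All numerical values in the sketch below are placeholders rather than data from the cited studies.

# Illustrative per-cell quantification from an internal-standard (IS) ratio.
import math

intensity_drug = 2.4e4        # integrated ion intensity of the drug (arbitrary units)
intensity_is = 1.1e7          # integrated ion intensity of the internal standard
c_is = 1.0e-6                 # IS concentration in the sampling solvent, mol/L (assumed)
flow_rate = 50e-9 / 60        # solvent flow rate, L/s (~50 nL/min, assumed)
t_acq = 30.0                  # data acquisition time over the cell, s (assumed)

# Moles of IS delivered during acquisition, then drug amount from the intensity ratio
# (assuming comparable ionization efficiencies or a 1:1 calibration).
n_is = c_is * flow_rate * t_acq
n_drug = n_is * (intensity_drug / intensity_is)

# Intracellular concentration using an estimated spherical cell volume.
cell_diameter_m = 15e-6
cell_volume_l = (math.pi / 6) * cell_diameter_m**3 * 1e3   # m^3 -> L
conc_drug = n_drug / cell_volume_l

print(f"drug amount ≈ {n_drug*1e18:.1f} amol per cell")
print(f"intracellular concentration ≈ {conc_drug*1e6:.1f} µM")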
Reaction through Noncovalent Interactions
In the quest
to improve the detection coverage of ionizable cellular metabolites,
experiments are often conducted in both positive and negative ionization
modes. However, this is particularly challenging in SCMS, due to the
extremely limited cellular content available (∼1 pL/cell), which makes repeated analyses impractical. Addressing
this limitation, in 2016, Pan, Rao, and co-workers introduced a unique
MS method that facilitates the detection of negatively charged species
in single cells using positive ionization mode. This approach leverages dicationic ion-pairing reagents
in conjunction with the Single-probe for real-time reactive SCMS experiments.
In their studies, two dicationic compounds, 1,5-pentanediyl-bis(1-butylpyrrolidinium)
difluoride (C5(bpyr)2F2) and 1,3-propanediyl-bis(tripropylphosphonium) difluoride (C3(triprp)2F2), were
added into the sampling solvent and introduced into single cells ( a and b). These dicationic reagents (2+) formed stable ion pairs
with negatively charged (1−) cellular metabolites, transforming
them into positively charged (1+) adducts, thus enabling their detection
in positive ionization mode with enhanced sensitivity. In three separate
SCMS experiments, 192 and 70 negatively charged metabolites were detected
as adducts with C5(bpyr)2F2 and C3(triprp)2F2, respectively, along with
the detection of other positively charged metabolites, highlighting
the capability of this approach to detect a broad spectrum of metabolites.
A key advantage of these dicationic compounds lies in their selectivity
for complex formation, allowing the discrimination of low-abundance
ions with nearly identical m / z values.
Additionally, MS/MS was employed for molecular identification of selected
adduct ions. This reactive SCMS method represents a significant advancement
by enabling the simultaneous detection of negatively and positively
charged metabolites in a single experiment. Most notably, many of
the negatively charged metabolites identified using dicationic reagents
were undetectable in negative ionization mode alone, demonstrating
the enhanced sensitivity offered by this technique. Future studies
could explore other compounds to further refine the sensitivity and scope of metabolite detection in single-cell analysis.
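The mass bookkeeping behind this ion-pairing approach is simple: a 2+ dication paired with a singly deprotonated metabolite gives a 1+ adduct whose m/z is the sum of the two masses, so candidate [M − H]− masses can be back-calculated from positive-mode adduct peaks. In the sketch below, the dication mass, the peak list, and the reference values are placeholders chosen only to illustrate the calculation.

# Back-calculating deprotonated-metabolite masses from dication ion-pair adducts.
# m/z(adduct, 1+) = mass(dication 2+) + mass([M - H]-), so
# mass([M - H]-) = m/z(adduct) - mass(dication 2+).
DICATION_MASS = 300.0000      # placeholder monoisotopic mass of the 2+ reagent (Da)

observed_adducts = [465.3042, 556.2768, 788.5431]   # hypothetical positive-mode peaks (m/z)
reference_anions = {                                 # hypothetical [M - H]- library (m/z)
    "metabolite A": 165.3040,
    "metabolite B": 256.2760,
    "metabolite C": 488.5440,
}
TOL = 0.01  # matching tolerance in Da (assumption)

for adduct_mz in observed_adducts:
    candidate = adduct_mz - DICATION_MASS
    matches = [name for name, mz in reference_anions.items() if abs(mz - candidate) <= TOL]
    print(f"adduct {adduct_mz:.4f} -> candidate anion {candidate:.4f} Da, matches: {matches or 'none'}")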
Reaction through Covalent Interactions
Lan et al. introduced a novel method for indirectly quantifying intracellular nitric oxide
(NO) by means of chemical reactions at the single-cell level ( a and b). NO, a reactive and short-lived
molecule (with a half-life of less than one second), plays a critical
role in various biological processes, including angiogenesis in tumors.
Quantifying NO at the single-cell level remains challenging due to
the small size of cells and NO’s reactive nature. There are
two main pathways for NO production: exogenous (provided by NO donor
compounds) and endogenous (produced by cells). Clinically, NO donors
are used in the treatment of conditions such as high blood pressure
and heart disease. Additionally, the anticancer drug doxorubicin (DOX)
can increase endogenous NO levels via the catalytic activity of nitric
oxide synthases (NOSs). Given NO’s crucial biological functions,
developing a method to accurately quantify NO at the single-cell level
is highly significant. Lan et al. proposed a method based on a quantitative
reaction between NO and amlodipine (AML), a compound containing the
Hantzsch ester group. The reaction between NO and AML yields dehydroamlodipine
(DAM), which can then be detected and quantified using the Single-probe
SCMS technique. Importantly, AML reacts selectively with NO, exhibiting
100% efficiency without interference from other reactive species within
the cell. In their study, individual cells were adhered to glass chips
containing microwells and were subsequently treated with AML under
different experimental conditions. To induce NO production, two compounds
were used: sodium nitroprusside (SNP) (to generate exogenous NO) and
doxorubicin (DOX) (to stimulate production of endogenous NO). The
Single-probe
SCMS system was employed for NO quantification, with acetonitrile
(ACN) containing 0.1% formic acid (FA) and 1.0 μM OXF (internal
standard) used as the sampling solution. Results from the SCMS studies
demonstrated that intracellular NO levels exhibited heterogeneous
distributions across the treated cells under all experimental conditions.
This method provides a robust approach to quantifying NO at the single-cell
level, offering insights into the complex biological roles of NO in
cellular systems.
Maintaining the metabolic integrity of live cells
during sample transport, storage, or extended measurements is critical,
particularly given the rapid turnover rate of metabolites and low
throughput of most ambient-based SCMS techniques, which require substantial
time to manually select and analyze a statistically significant number
of cells. A recent study developed a robust methodology to preserve
cellular metabolomic profiles for SCMS experiments, addressing the
challenge in most ambient SCMS metabolomics studies. This study introduced a cell preparation protocol combining washing
by volatile salt (ammonium formate (AF)) solution, rapid quenching
in liquid nitrogen (LN 2 ), vacuum freeze-drying, and storage
at −80 °C to stabilize cell metabolites for SCMS analysis.
Experimental findings demonstrated that LN 2 quenching effectively
preserved the overall metabolome, while storage at −80 °C
for 48 h caused minor changes in metabolite profiles of quenched cells.
In contrast, unquenched cells exhibited significant metabolic alterations
despite low-temperature storage. Further investigation revealed the
necessity of quenching to maintain metabolic integrity and emphasized
minimizing low-temperature storage duration to limit metabolic perturbations.
The proposed method is readily applicable to SCMS workflows, ensuring
metabolite stability during extended studies while maintaining the
fidelity of metabolic profiles.
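One simple way to express how well such a protocol preserves the overall metabolome is to correlate the average metabolite profile of stored cells with that of freshly analyzed cells. The sketch below does this on simulated profiles; the data and the correlation-based comparison are illustrative assumptions, not the metrics used in the cited study.

import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
n_metabolites = 200

# Simulated mean TIC-normalized profiles: fresh cells, quenched-then-stored cells
# (small multiplicative perturbation), and unquenched stored cells (larger perturbation).
fresh = rng.lognormal(0, 1, n_metabolites)
quenched_stored = fresh * rng.lognormal(0, 0.05, n_metabolites)
unquenched_stored = fresh * rng.lognormal(0, 0.40, n_metabolites)

for label, profile in [("quenched, -80 C storage", quenched_stored),
                       ("unquenched, -80 C storage", unquenched_stored)]:
    rho, _ = spearmanr(fresh, profile)
    print(f"{label}: Spearman rho vs fresh profile = {rho:.3f}")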
As a microscale sampling
and ionization device, the Single-probe can be coupled to MS for other
studies. The Single-probe MS imaging (MSI) technique, first introduced
in 2015, is a novel tool for analyzing biomolecules on tissue slices
with high spatial resolution under ambient conditions. During the MSI experiment, the Single-probe tip is placed closely
above the tissue slice, and the solvent junction at the tip performs
in-situ surface microextraction, and the extracted molecules are immediately
analyzed by MS ( a–c). Using a programmed stage control system, the Single-probe
tip performs continuous raster sampling of the region of interest
in tissue. MS images of ions of interest can be constructed using
a visualization tool. The Single-probe is capable of producing MSI
images of biological tissue slices with a spatial resolution as fine
as 8.5 μm ( d and e), making it one of the highest
resolutions among ambient MSI methods available.
Combining the Single-Probe MSI with Chemical Reactions
Due to its unique design, the sampling solvent of the Single-probe
can be flexibly selected. Similar to the relevant application in SCMS
studies, the use of dicationic compounds
(i.e., [C5(bpyr)2F2] and [C3(triprp)2F2]) in MSI experiments enabled the detection
of negatively charged species in the positive ion mode. Particularly, detection of metabolites in the
range of 600–900 m / z was
improved with enhanced ion intensities compared to regular negative
ionization modes. This technique also allowed the detection of metabolites
that were previously undetectable under standard conditions.
Combining Advanced Data Analysis with Single-Probe MSI Experiments
Due to their high dimensionality, high complexity, and large size,
extracting essential biological information from MSI data is generally
challenging. To facilitate the relevant studies, advanced data analysis
methods have been developed and combined with the Single-probe MSI
experiments. Tian et al. developed a data analysis method using Multivariate Curve Resolution
(MCR) and Machine Learning (ML) approaches, and then used it to analyze
the MSI data from a mouse kidney slice. This method involved four
main steps: data preprocessing, MCR-Alternating Least Squares (ALS),
supervised ML (e.g., Random Forest), and unsupervised ML (e.g., Clustering
Large Applications (CLARA) and Density-based Spatial Clustering of
Applications with Noise (DBSCAN)). A key step was using t-SNE, a dimensionality
reduction tool, to process and visualize the complex datasets. For
supervised ML methods, predefined histological regions identified
through MCR-ALS were used to train the models. In unsupervised methods,
t-SNE prepared the data for clustering. The combination of these approaches
provided a more thorough understanding of chemical and spatial features
in the data. Other machine learning methods were then developed to
improve the MSI data analysis. In a study involving slices of cancer
spheroids, the Single-probe was used to examine the effects of the
anticancer drug irinotecan on colorectal cancer (HCT-116) spheroids. By obtaining spatially resolved metabolomic
profiles, the technique revealed how the drug affected the abundance
of metabolites in different regions of the 3D tumor model. ML techniques,
such as Random Forest and CLARA, were employed to analyze the MSI
data, improving the identification and classification of metabolomic
features. The MS images obtained from the Single-probe MSI experiments
can
be integrated with fluorescence microscopy images through image fusion
( a–c).
In Alzheimer’s disease (AD) research, the Single-probe was
used to investigate the spatial distribution of metabolites around
amyloid-beta (Aβ) plaques in an AD mouse brain. Image fusion allowed researchers to correlate histological
markers (detected through fluorescence microscopy) with metabolomic
features (observed through MSI). This combined approach improved spatial
resolution (∼5 μm) and provided insights into abnormal levels of metabolites, such as lysophospholipids, malic acid, and glutamine, that are linked to the progression of AD.
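To illustrate the kind of processing that turns raster-scanned Single-probe data into images and segmented regions, the sketch below builds an ion image from a pixel-by-m/z intensity array and clusters TIC-normalized pixel spectra into regions. The grid size, selected ion, and cluster count are arbitrary illustrative choices and are not tied to the MCR/ML pipeline or datasets described above.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
n_rows, n_cols, n_ions = 40, 60, 120

# Simulated MSI data: each pixel holds an intensity vector; two spatial regions
# differ in a subset of ions (e.g., distinct histological regions).
pixels = rng.lognormal(0, 0.3, size=(n_rows, n_cols, n_ions))
pixels[:, :30, :20] *= 3.0                      # region enriched in the first 20 ions

# Ion image for one selected m/z channel: reshape that channel onto the raster grid.
ion_index = 5
ion_image = pixels[:, :, ion_index]             # shape (n_rows, n_cols), ready for plotting

# Unsupervised segmentation: cluster TIC-normalized pixel spectra into regions.
spectra = pixels.reshape(-1, n_ions)
spectra = spectra / spectra.sum(axis=1, keepdims=True)
segmentation = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(spectra)
segmentation_map = segmentation.reshape(n_rows, n_cols)

print("ion image shape:", ion_image.shape)
print("pixels assigned to each cluster:", np.bincount(segmentation))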
The Single-probe can be used as a microscale sampling device to
extract analytes for direct MS analysis. In a study by Sun et al.,
the integration of the microfunnel, which was implanted into a spheroid,
with the Single-probe provided an innovative approach to analyze extracellular
metabolites in live multicellular tumor spheroids. This work focused on understanding
the effects of anticancer drug treatments in the tumor microenvironment. This technique is particularly valuable for
capturing undiluted extracellular compounds inside single spheroids,
a critical area due to its unique microenvironment and potential for
harboring drug-resistant cells. To carry out this work, the researchers
first developed the microfunnel from a biocompatible fused silica
capillary with a fine tip (∼25 μm), enabling precise
implantation into the spheroid to collect extracellular compounds.
The spheroids, cultured using a colon carcinoma cell line (HCT-116),
were treated with the anticancer drug irinotecan under various concentrations
and durations. The microfunnel allowed for targeted sampling, accumulating
metabolites in a microscale environment that would otherwise be challenging
to access without dilution or selection bias. Once metabolites were
collected, the Single-probe was inserted into the opening of the microfunnel
to extract these metabolites, which were then analyzed by MS. The changes in
the spheroid’s extracellular lipid profile were observed, particularly
in phospholipids and glycerides, with increased lipid abundance as
drug treatment concentration and exposure time increased. These results
indicated that irinotecan prompted significant shifts in lipid metabolites,
which could contribute to drug-resistance mechanisms within central
tumor cells. This study’s workflow demonstrates an effective
methodology for profiling the extracellular environment of live spheroids,
making it a valuable tool for investigating drug response, cellular
communication, and resistance mechanisms in three-dimensional (3D)
cancer models.
The general design of the Single-probe device has been adopted
and modified by other researchers for a variety of different studies.
Quantitative MSI Studies
In 2021, Wu et al. applied
the Single-probe technique for per-pixel absolute quantification of
endogenous lipidomes through model prediction of mass-transfer kinetics. This method enabled ambient liquid extraction
MSI in rat cerebellum, utilizing phosphatidylcholine (PC) and cerebroside
(CB) standards doped in the extraction solvent. By studying the extraction
kinetics of endogenous lipids during the probe’s stationary
phase in each tissue pixel, the team could gather detailed kinetic
data.
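The per-pixel kinetic idea can be illustrated with a simple first-order extraction model, S(t) = S_max(1 − e^(−kt)), fitted to the signal accumulated while the probe dwells on a pixel; the plateau value estimates the extractable signal, which doped standards could then convert to an absolute amount. This model form and the simulated data below are assumptions made for illustration and are not necessarily the kinetic model used by Wu et al.

import numpy as np
from scipy.optimize import curve_fit

def extraction_signal(t, s_max, k):
    """Cumulative extracted signal under a simple first-order model (an assumed form)."""
    return s_max * (1.0 - np.exp(-k * t))

# Simulated per-pixel signal during the probe's stationary (dwell) phase.
rng = np.random.default_rng(6)
t = np.linspace(0, 10, 25)                      # dwell time, s
true_s_max, true_k = 1.0e6, 0.6
signal = extraction_signal(t, true_s_max, true_k) + rng.normal(0, 2e4, t.size)

# Fit the model; s_max estimates the total extractable signal for this pixel,
# which could then be converted to an absolute amount via doped standards.
popt, pcov = curve_fit(extraction_signal, t, signal, p0=(signal.max(), 0.1))
print(f"fitted S_max = {popt[0]:.3g}, k = {popt[1]:.3g} s^-1")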
Enrichment of Low-Abundance Analytes on Biological Tissue Slices
In 2021, Wang et al. further leveraged the Single-probe fabrication to create a microprobe with a larger tip size, suitable for ambient
liquid extraction MSI but not single-cell analysis. This study aimed to address the limited imaging coverage
of low-abundance or low-polarity lipids, such as glycerolipids and
sphingolipids, in complex tissues. To do so, they applied a porous
graphitic carbon (PGC) material to imprint brain tissue sections selectively,
enriching neutral lipids while removing polar phospholipids. Subsequent
scanning of the PGC-imprinted tissue with the ambient liquid extraction
MSI system revealed that hydrophobic interactions dominate in protic
solvents on the PGC surface, while polar interactions dominate in
aprotic solvents. A recent study performed by this group presents
a novel MSI approach that enhances spatial lipidomics analysis using
a graphene oxide/titanium dioxide (GO/TiO 2 ) nanocomposite
as a mixed-mode adsorptive material. By combining the chelation affinity of TiO 2 with the
hydrophobic interaction of GO, the material facilitates selective
enrichment of poorly ionizable glycolipids and glycerides while reducing
ion suppression and peak interference from high-abundance polar lipids.
Optimized solvent systems enabled on-plate separation of lipid classes
and efficient two-step ambient liquid extraction MSI. This method
significantly improved lipid coverage, detecting a greater variety
of glycolipids, glycerides, and phospholipids compared to traditional
MSI techniques. Application to rat cerebellum tissue demonstrated
higher imaging quality and comprehensive lipid profiling, advancing
the depth and scope of spatial lipidomics studies. Their future work
will focus on scaling the nanocomposite coating for single-cell MSI. In 2024, Wu et al. advanced the Single-probe for ambient liquid
extraction MSI studies aimed at enhancing the detection of poorly
ionizable lipids in brain tissue using a Lewis acidic metal–organic
framework (MOF). In this study, the
sample was placed on a triaxial platform, with the Single-probe affixed
in a perpendicular orientation, relative to the sample surface. The
team employed 1% FA-MeOH as the extraction solvent at a flow rate
of 5 μL/min, delivered via a syringe pump,
while a vacuum pump drew the solvent into the probe, creating a stable
liquid junction with a precise 10 μm distance between the probe
tip and sample surface. This approach effectively mitigated ion suppression
by phospholipids in MSI, significantly improving the detection coverage
of low-abundance, poorly ionizable lipids.
Using Chemical Reactions to Improve the Detection of Low-Abundance Analytes with Low Ionization Efficiencies
In 2024, Lu et
al. developed a novel method to address challenges in lipidomics,
specifically for glycosphingolipids (GSLs), which are difficult to
ionize and analyze. This method introduces
a photoinduced enrichment and deglycosylation approach, implemented
in an ambient liquid extraction MS system, to improve GSL detection
coverage and structural elucidation in single-cell analysis. Using
TiO 2 in ammonia-based protic solvents, GSL standards were
selectively adsorbed. Under UV irradiation, GSLs underwent deglycosylation
(losing one hexosyl group) with a high conversion efficiency (>70%),
then desorbed from TiO 2 . Coating the TiO 2 onto
a capillary probe enabled selective GSL enrichment while separating
them from high-abundance phospholipids, reducing ion suppression.
UV exposure triggered rapid photodesorption without solvent changes,
achieving 6-fold GSL enrichment. This enhanced GSL detection 9-fold,
compared to traditional methods, allowing for detailed fatty acyl
and sphingosine chain elucidation through increased MS/MS fragmentation.
The method was applied to lipidomics in nerve cells, identifying 31
lipids, including 11 GSLs, and detecting alterations in five hexosylceramides
after neuron injury. This innovative TiO 2 -coated probe
demonstrated low limits of detection (3.7 ng/mL), high linearity ( r > 0.99), and repeatability (RSD < 20%). In brain
tissue
analysis, this technique identified 38 more lipids than conventional methods. Overall, this approach significantly advances single-cell
lipidomics by enhancing GSL detection and structural analysis, providing
valuable insights for biomedical and photo-oxidation research.
Since the Single-probe was first introduced in 2014, Single-probe-based methods
have demonstrated their capabilities in various studies of microscale
bioanalyses, such as single cells, tissue slices, and 3D tumor models,
in ambient conditions. Implemented with other techniques in instrumentation
(e.g., microscopy and precise manipulation), chemical reactions, and
surface functionalization, applications of these methods have been
largely extended. The advancement in data analysis tools (e.g., multivariate
analysis and machine learning) enables extraction of essential information
from complex data. Despite their advantages, broad applications of the Single-probe-based methods still face multiple challenges.
In Single-probe SCMS studies, cell sampling must be manually performed
using the XYZ-stage system guided by a microscope. Although this is
beneficial for studies of target cells, which can be labeled by dyes
or fluorescent proteins, among heterogeneous populations, these manual
procedures largely limit the analysis throughput. In fact, microfluidic techniques have been implemented in SCMS metabolomics studies. Similar strategies can potentially be adopted by the Single-probe
SCMS setup to improve its analysis throughput. In Single-probe MSI
studies, maintaining the robustness of the experimental setup for
stable data acquisition (e.g., several hours) has been challenging.
These issues can be mitigated by fabricating robust probes with carefully
adjusted tip sizes and shapes. In fact, taking advantage of modern
microfabrication techniques (e.g., micromachining, microinjection,
and 3D printing), the fabrication of high-quality Single-probe devices
can be automated and standardized, promoting their widespread adoption
with high consistency and reliability across laboratories. In addition,
enclosed, environmentally controlled setups can further enhance reproducibility
by mitigating external influences such as temperature and humidity
variations. In principle, the Single-probe setup can be customized
and coupled with any model of mass spectrometer with a suitable interface.
Its open design allows for flexible customization of the translation stage system, the microscope, and the selection and delivery of solvents and reagents.
As with all other MS-based studies, Single-probe MS techniques can benefit from the rapid advancements in modern mass spectrometers (e.g., detection sensitivity, mass resolution, and data acquisition speed). Collectively, these technological innovations and advancements
will broaden the utility of Single-probe MS methods, solidifying their
roles in advancing cutting-edge biological research.
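As a concrete illustration of the multivariate analysis step mentioned above, the sketch below applies a principal component analysis to a mock single-cell intensity matrix (cells x m/z features). It is a generic example of how such data are commonly explored rather than a reproduction of any specific Single-probe workflow; the data matrix and the two "cell groups" are randomly generated.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Mock SCMS data: 40 cells x 200 m/z features, two synthetic cell populations.
group_a = rng.lognormal(mean=1.0, sigma=0.4, size=(20, 200))
group_b = rng.lognormal(mean=1.0, sigma=0.4, size=(20, 200))
group_b[:, :20] *= 3.0          # pretend 20 features are elevated in the second group
intensities = np.vstack([group_a, group_b])

# Typical preprocessing: log-transform and autoscale, then project onto two components.
X = StandardScaler().fit_transform(np.log1p(intensities))
pca = PCA(n_components=2)
scores = pca.fit_transform(X)

print("explained variance ratio:", pca.explained_variance_ratio_)
print("PCA scores of the first cell:", scores[0])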
The Chemical Composition and Health-Promoting Effects of the
The increasing number of deaths associated with cardiovascular disease, diabetes, cancer, inflammation, and other physiological disorders has gained the attention of health experts, researchers, and policymakers, with a view to promoting healthy eating practices. Fruits and vegetables possess phytochemicals and metabolites that exhibit anticancer and anti-inflammatory effects, owing to their ability to scavenge free radicals in living systems. Among these, the Grewia species are rich in phytochemicals and are regarded as a promising niche in averting or ameliorating the aforementioned chronic ailments. There are about 159 species of Grewia that are grown in tropical and sub-tropical areas of Pakistan, India, China, Malaysia, South Africa, Australia, northern Thailand, and Nigeria. The fruits of some of the Grewia species are edible, e.g., G. asiatica, G. optiva, G. mollis, G. occidentalis, and G. tenax. Numerous species of this genus have been shown to possess a variety of ethnopharmacological applications; e.g., G. asiatica leaves have been reported to cure skin problems such as eczema, eruptions, and inflammation, as well as asthma, bronchitis, colds, coughs, and sore throat. G. optiva is used as “folk” medicine in the treatment of dysentery, typhoid, diarrhea, fever, cough, and smallpox. G. tiliaefolia has been widely used to cure jaundice, biliousness, dysentery, and diseases of the blood. The ethnomedicinal formulations of G. mollis include infusion, decoction, maceration, or mucilage from the leaves, roots, or stem bark. G. hirsuta has been conventionally used to treat several disease conditions, such as rheumatism, joint pain, cholera, diarrhea, and ulcers. G. tenax has been reported to cure distress of the stomach and skin, intestinal infections, fever, diarrhea, dysentery, hepatic disorders, jaundice, and rheumatism, and has been reported to have antibiotic properties. The boiled leaves of G. microcos are traditionally used to improve digestion and are also used for colds, hepatitis, diarrhea, heat stroke, dyspepsia, typhoid fever, and syphilitic ulceration of the mouth. These traditional uses are increasingly supported by recent scientific research, wherein some species of this genus have now been confirmed to possess anticancer, anti-inflammatory, antinociceptive, antioxidant, hepatoprotective, antidiabetic, antimicrobial, antimalarial, and sedative–hypnotic properties. They are also reported to hold immunomodulatory potential, to ameliorate learning and memory deficits, and to be effective against neurodegenerative ailments. Such effects are predominantly attributed to the synergistic effects of phenolics such as flavonoids (i.e., flavones, flavanones, isoflavonoids, flavanols, dihydroflavonols, tannins, anthocyanidins), triterpenes, alkaloids, and phytosterols that are abundantly available in these species. The Grewia species are also considered to be among the most nutritious foods, since they are high in fiber, vitamins, carbohydrates, protein, and minerals, all of which are essential for a healthy lifestyle. G. asiatica fruits are enjoyed by people of all ages and communities in Pakistan because of their exquisite taste and affordable cost. Fresh fruits are consumed raw, and soft drinks are also produced from them. Jams, pies, squashes, and chutneys are all made using the fruit. In Sudan, rural peasants utilize G.
tenax fruits as an iron supplement for anemic children. Nesha is a thin porridge made from millet flour and the pulp of G. tenax fruits, which is then thickened with custard. This porridge is provided to pregnant and breastfeeding women to help them stay healthy and produce milk for their babies. In Pakistan and India, G. optiva fruits are edible and have a pleasant, acidic taste. The leaves are rated as good fodder, and the trees are heavily lopped for this purpose in the winter months when no other green fodder is usually available. In view of the preceding observations, a thorough and systematic analysis of the nutrient and phytochemical composition of the Grewia species could assist in a better understanding of the role of this genus in human nutrition and health. This review intends to provide a broad view of this genus, beyond the currently available reviews, and highlights potential future biological and pharmacological research on the wide range of phytochemicals found in this genus. Herein, we systematically evaluate the nutrient and bioactive composition of the Grewia species, including the reported concentrations of its bioactive components and the related biological activities. Additionally, a bibliometric analysis was performed for the first time, all to encourage experts from underrepresented localities of the globe to initiate new studies.
2.1. Literature Search and Methodology
The present review on the genus Grewia was planned and conducted following the PRISMA statement and the systematic review approach adopted by Muka et al. . Different bibliographic databases (PubMed, Scopus, Web of Science, Google Scholar) were explored to screen fifty-six relevant scientific articles published between 1975 and 31 March 2021 (date of last search) . The search terms were related to the nutritional aspects and the phytochemical and pharmacological profiling of the genus Grewia (e.g., “nutritional composition”, “traditional medicinal uses of Grewia”, “biological activities of genus Grewia”, “phytochemical composition of Grewia”, “antioxidant potential of Grewia”, “anticancer analysis of Grewia”, “in vitro and in vivo anti-inflammatory activities of Grewia”, “anti-diabetic properties of Grewia”).
2.2. Study Selection Criteria
Articles were selected according to the criteria listed below:
i. any part of a Grewia species, such as the pulp, skin, seeds, roots, bark or leaves, was described;
ii. an evaluation of nutritional profiling, phytochemical composition/characterization, or pharmacological activities was provided.
Conference abstracts, letters to the editors, proceedings of conferences, literature reviews, meta-analyses, morphological studies, and product development experiments were excluded. To discover additional relevant articles, the reference lists of the included articles were checked (backward reference searching).
2.3. Data Extraction
The articles were sorted and screened by two reviewers (the first and fourth authors) with respect to the provided information . Articles focusing on the nutritional and phytochemical composition/identification/characterization/quantification and the health-promoting impacts of the Grewia species, including the antioxidant, anti-inflammatory, antidiabetic, anticancer, antimicrobial, and other biological activities, were included in this review. The other key features extracted were the year of publication, species, type of solvent used for extraction, technique/method adopted for the identification of bioactive metabolites, and the parts of the plants used in the experiments. Moreover, in vitro experiments and in vivo animal-based studies were also considered. Among all the eligible studies, eleven evaluated the proximate composition, including carbohydrates, fat and fatty acids, protein and amino acids, fiber, ash and minerals, and vitamins. Nineteen studies evaluated the phytochemical composition, including flavonoids, phenolic acids, terpenoids, phytosterols, carboxylic acids, hydroxycinnamic acids, sesquiterpenoids, hydroxycoumarins, fatty alcohols, phenols, xanthones, hydroxyquinols, and non-flavonoids. Eleven studies determined the antioxidant potential of Grewia using in vitro experiments. Six studies focused on in vitro anticancer properties using the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl-2H-tetrazolium bromide (MTT) assay against various cancer cell lines. In five articles, the anti-inflammatory properties of Grewia were analyzed; four of them were in vivo and one was in vitro. Eight studies evaluated the radioprotective/hepatoprotective potential of Grewia against radiation-induced thiobarbituric acid reactive substances (TBARS) and lipid peroxide production. Nine antimicrobial studies were conducted, wherein four focused on antibacterial properties, two on both antibacterial and antifungal activities, and two only studied antifungal capabilities.
Seven articles evaluated the antidiabetic potential, wherein three studies used animal models, three used in vitro models, and one used a non-diabetic human model.
2.4. Bibliometric Analysis
Bibliometric analysis is a computational method for analyzing selected published research/review articles, as well as other related works on a subject, and its outcomes are intended to attract the close attention of experts, for example from the pharmaceutical industry . The network maps were created based on the research relationships between article authors, keywords in papers, the journals in which the publications appeared, and the organizations where the research was performed . In the present systematic review, the analysis was performed on the published data on various Grewia species. The co-authorship analysis was performed to investigate the interactions among scholars in relation to a research topic, co-authorship being a formal means for researchers to collaborate intellectually . The goal was to create a network model that could describe the interactions between researchers from various areas of the world. For this purpose, relevant articles were found in the Mendeley database using the multiple key terms given in . Importantly, research articles were selected based on their publication year, between 1975 and 2021, to capture the scope of research on the Grewia species during the last 50 years. A network visualization map was constructed based on this refined list using VOSviewer software version 1.6.16 (available online: www.vosviewer.com, accessed on 5 November 2021) for the bibliometric analysis. For the study, a supported RIS file was uploaded into the software. The type of analysis selected was “co-authors”, the unit of analysis was “authors”, the counting method was “full counting”, and the maximum number of authors per document was set to 25. In co-authorship networks, nodes represent authors, organizations, or countries, which are connected when they share the authorship of a paper, and these insights can be used to justify and encourage new studies among experts from underrepresented localities .
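VOSviewer performs the co-authorship mapping itself, but the “full counting” logic underlying such maps is straightforward: every pair of co-authors on a paper contributes one unit of link strength between them. The following minimal Python sketch illustrates that counting step on a made-up list of papers; it is an illustration of the principle only and is not part of the actual VOSviewer workflow used here.

from itertools import combinations
from collections import Counter

# Made-up author lists for a few papers - illustrative only.
papers = [
    ["Author A", "Author B", "Author C"],
    ["Author A", "Author C"],
    ["Author B", "Author D"],
]

link_strength = Counter()   # co-authorship link strength ("full counting")
documents = Counter()       # number of documents per author

for authors in papers:
    unique_authors = sorted(set(authors))
    documents.update(unique_authors)
    for pair in combinations(unique_authors, 2):
        link_strength[pair] += 1

print("documents per author:", dict(documents))
print("co-authorship links:", dict(link_strength))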
In total, 167 chemical compounds, allocated to 21 categories, were reported from the 12 Grewia species included in the study . Flavonoids represented 41.9% of the reported bioactive compounds, followed by protein and amino acids (10.9%), fats and fatty acids (9.72%), ash and minerals (6.67%), non-flavonoid polyphenols (6.05%), triterpenes (4.86%), phenolic acids (4.79%), vitamins (3.03%), and carboxylic acids (3.03%); all other categories accounted for less than 2% of the total reported compounds . Of the 167 reported compounds, information on concentrations was available for 114 (68.3%) of them, grouped in 9 categories. The information on concentration was not available for 53 compounds (31.6%), grouped in 12 categories. Moreover, also presents the compounds according to the parts of the plant in which they were reported. A total of 15 categories were studied in fruit, 6 in seeds, 8 in leaves, 4 in stem bark, 6 in roots, and 3 in flowers. Concerning the methods used to identify and quantify the phytochemicals, we extracted information on the solvent or extract used to analyze every compound and the techniques used to identify or quantify them. As shown in , a wide variety of extracts/solvents and techniques were reported in the literature. In detail, 25.2% of the compounds were extracted with methanol in six studies , 47.8% with acidified methanol in two studies , 11.7% with water in two studies , 5.04% with 50% methanol in one study , 4.20% with petroleum ether in one study , 3.36% with chloroform in two studies , 2.52% with ethyl acetate in one study , 1.68% with aqueous acetone in one study , and 1.68% with 80% methanol in one study . Mass spectrometry was the most commonly employed technique for the identification of bioactive compounds (81.5%), wherein one study used ESI-MS/MS, one used LC–QToF–MS, one used GC-MS, and two used NMR spectroscopy. Liquid chromatography was the second most commonly used approach (11.7%): two studies employed HPLC with a diode array detector and one article used TLC. This information was not available for 6.70% of the compounds.
3.1. Chemical Composition
Five studies reported qualitative and quantitative analyses of the proximate composition of various Grewia species, including G. asiatica, G. tenax, G. flavescens, G. villosa, G. tilifolia, and G. nervosa . The total contents of carbohydrates, fibers, lipids, proteins, and ash were reported in the fruits, seeds, and leaves. The data illustrate that carbohydrate contents were highest in the fruits, ranging between 21 and 84% , followed by seeds, 39–66% , and leaves, 28–40% . The fat content in seeds was reported as 11.1%, almost six times higher than that recorded in fruits (0.10–1.70%) and three times higher than in leaves (2.60–3.86%) . On average, leaves (12.9–18.9%) and seeds (7.50–17.4%) were reported to be a richer source of protein than fruits (1.57–8.7%). A similar trend was observed for fiber, wherein the leaves exhibited the highest fiber content, 28.3–38.3%, followed by seeds at 14.8–26.1% and fruits at 5.53–25.5% on average. The ash content of leaves (6–11%) was on average almost two to three times higher when compared to seeds (3–5.08%) or fruits (1.1–5.2%). represents in detail the proximate composition of the different Grewia species. Fruits and vegetables contain a huge array of secondary metabolites and, in fact, these metabolites form the basis for numerous commercial pharmaceutical drugs, as well as herbal remedies derived from medicinal plants .
Today, the pharmacological and disease-preventing role of various classes of phytochemicals is firmly established. These chemical constituents predominantly act as antioxidants, anticancer agents, detoxifying agents, and immunity-potentiating and neuropharmacological agents . Grewia has been shown to contain a wide variety of phytochemicals and bioactive compounds. Among the seven Grewia species considered for the phytochemistry study, G. asiatica was explored in eleven studies, G. optiva in three articles, and G. lasiocarpa, G. biloba, G. microcos, G. tiliaefolia , and G. hirsuta in each study. The information on phytochemical identification/quantification was reported in 19 articles, and three of them performed the quantification analysis . Regarding the plant parts analyzed in each study, the fruits of G. asiatica were the most explored, with five articles studying fruits alone . Two articles focused on G. asiatica leaves , two explored G. asiatica flowers , one studied Grewia optiva leaves , one studied G. asiatica stems and each studied G. microcos and G. lasiocarpa stems , G. tiliaefolia bark , G. hirsute leaves , and G. optiva roots and stems to identify the phytochemical constituents. We found 113 secondary metabolites reported from G. asiatica , G. optiva , G. tiliaefolia, G. biloba , G. microcos , G. hirsuta , and G. lasiocarpa allocated to 13 categories wherein 102 compounds were reported from G. asiatica , 19 were identified from G. optiva , three were identified from G. tiliaefolia , six were identified from G. biloba, seven were identified from G. microcos , one was identified from G. hirsuta , and one was identified from G. lasiocarpa. The same compounds identified in different studies were considered as a single compound with each presented with a respective reference. Flavonoids represented 41.3% of the reported bioactive compounds wherein the most dominant subgroup was anthocyanins (13.04%) followed by flavones (6.95%), flavanones (3.47%), isoflavonoids (3.47%), and flavanols (3.47%). Phenolic acids represented 6.95% of the reported compounds followed by triterpenes (6.95%), carboxylic acid (3.47%), phytosterols (2.60%), dihydroflavonols (1.73%), hydroxycinnamic acids (1.73%), sesquiterpenoids (1.73%), fatty acids (1.73%), 7-hydroxycoumarin (0.86%), fatty alcohol (0.86%), phenols (0.86%), xanthones (0.86%) and hydroxyquinols (0.86%). Of the 113 reported secondary metabolites, information on concentration was available for only 62 (54.86%) of them, grouped in 3 categories including flavonoids (anthocyanins, isoflavonoids, flavone, flavanones, flavanols, flavonols, dihydroflavonols), hydroxycinnamic acid, and 7-hydroxycoumarins. presents the compounds according to the parts of the plant. Eight categories of secondary metabolites were studied in fruits, four in stem bark, three in flowers, three were reported in leaves, and one in pomace. Out of 19 studies, only 3 performed the quantitative analysis whereas 16 articles without quantitative information of bioactive metabolites were reported . 3.2. Biological Activity 3.2.1. Antioxidant Activity Antioxidant-based drug formulations are used for the prevention and adjunct treatment of complex diseases such as Alzheimer’s, stroke, cancer, diabetes, and atherosclerosis, whose etiology is partly dependent on persistent oxidative damage by free radicals. Grewia has been identified as a candidate for the development of nutraceutical products by virtue of an array of relevant bioactive compounds. 
Further investigations at the molecular level, however, are still needed to explore and discuss the mechanisms of action of these active ingredients . Eleven studies investigated the antioxidant potential of the Grewia species; eight of them studied G. asiatica , one article focused on G. optiva , one evaluated G. lasiocarpa , and one appraised both G. flava and G. biocolor . The scavenging and reducing potential of various parts of the Grewia species was reported to be dose dependent. 2,2-Diphenyl-1-picrylhydrazyl (DPPH) was the most commonly employed antioxidant assay utilized in nine studies along with other methods, ABTS (2,2′-azino-bis (3-ethylbenzothiazoline-6-sulfonic acid)) in three, ferric reducing antioxidant power (FRAP) in three, nitric oxide (NO) in two, and hydrogen peroxide (H 2 O 2 ) in one study. The principle of the DPPH assay involves measuring the change in DPPH color from violet to pale yellow, resulting from the existence of radical scavenging compounds . Six studies indicated that the Grewia species showed notable antioxidant potential against stable free radicals of which five studies were from G. asiatica , and one study was from G. optiva . Among the five studies on G. asiatica , four studied the edible portion and one studied the leaves of G. asiatica. Two studies adopted aqueous methanol extraction, two studies used pure methanol, and one study used benzene for extraction. The one reported study on G. optiva focused on leaves and one focused on stems using methanol or water as extraction solvents. Methanol and aqueous methanol extracts of the G. asiatica fruits showed the substantial scavenging activity to be between 60 and 85%. Another study reported by Gupta et al. recorded an IC 50 of 16.19 µg/mL for the benzene extract of the G. asiatica leaves against free radicals in the DPPH assay, which is almost 4.8 times more than standard ascorbic acid noticed with IC 50 78.17 µg/mL . Three studies used a FRAP assay to evaluate the reducing potential of the Grewia species, of which two studied G. asiatica fruits and one studied the stem of G. lasiocarpa . The FRAP assay estimates the electron donating capacity of any compound based on the reduction of ferric ion (Fe 3+ , as ferric tripyridyl triazine: Fe 3+ -TPTZ) into ferrous ion (Fe 2+ , as ferrous tripyridyl triazine: Fe 2+ -TPTZ) . Fifty per cent of the methanolic extract of the G. asiatica fruit evinced a dose-dependent reducing ability of 43 mg gallic acid equivalent per gram (GAE/g) which is approximately 10 times more than 100% methanolic extract of the G. asiatica fruit extract (4.14 mg GAE/g) . Three studies used the ABTS assay to measure the antioxidant activity of the Grewia species. Similar to the DPPH assay, the ABTS assay determines the antioxidant activity of hydrolysates that scavenge ABTS radicals. The Grewia species showed a dose-dependent ABTS scavenging. In , the meta-analysis for the antioxidant ( a), anticancer ( b), anti-inflammatory ( c), and antimicrobial ( d) activities is shown. Ten studies were included in the meta-analysis of the antioxidant activities of various Grewia species, as summarized in a. The meta-analysis revealed that the Grewia species showed notable antioxidant activity (MRAW = 59.71, 95% CI = 36.51–82.90, p = 0.0, I 2 = 100%) overall. 
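Pooled estimates of the kind reported here (a raw mean with its 95% CI and an I² heterogeneity statistic) are typically obtained with a random-effects model such as the DerSimonian–Laird approach. The sketch below outlines that calculation on mock study means and standard errors; the numbers are invented for illustration, and the code is a simplified outline rather than the software actually used for this review.

import numpy as np

# Mock per-study estimates (e.g., % activity) and standard errors - illustrative only.
means = np.array([62.0, 75.0, 48.0, 81.0])
se = np.array([4.0, 6.0, 5.0, 7.0])

w = 1.0 / se**2                                   # inverse-variance (fixed-effect) weights
fixed_mean = np.sum(w * means) / np.sum(w)

# DerSimonian-Laird between-study variance (tau^2) and I^2
k = len(means)
Q = np.sum(w * (means - fixed_mean) ** 2)
C = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - (k - 1)) / C)
I2 = max(0.0, (Q - (k - 1)) / Q) * 100.0

# Random-effects pooled mean and 95% CI
w_re = 1.0 / (se**2 + tau2)
pooled = np.sum(w_re * means) / np.sum(w_re)
se_pooled = np.sqrt(1.0 / np.sum(w_re))
low, high = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

print(f"pooled mean = {pooled:.2f}, 95% CI = {low:.2f} to {high:.2f}, I2 = {I2:.0f}%")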
However, a detailed sub meta-analysis of four and three studies unveiled significant antioxidant properties in the DPPH (MRAW = 64.34, 95% CI = 12.28–116.40, p = 0.0, I 2 = 100%) and ABTS assay (MRAW = 79.36, 95% CI = 18.43–140.28, p < 0.01, I 2 = 100%), respectively. In contrast, the NO, FRAP, and H 2 O 2 assays were only in one study; therefore, a heterogenic analysis was not possible. 3.2.2. Anticancer Properties Despite the overwhelming research response by researchers, cancer still represents the second leading cause of death and is trending towards becoming the leading cause in the elderly . Besides the tremendous development in anticancer therapies and drugs, the prevention of tumor generation by adopting a healthy lifestyle is generally considered as an effective strategy to reduce cancer risk. It is well established that diets rich in fruit and vegetables are useful in cancer prevention by virtue of their content of a wide variety of phytochemicals . Their preventive activity goes beyond the antioxidant capacity, and includes effects on the expression of oncogenes, tumor suppressor genes and transcription factors, and on cell cycle and apoptosis . Six studies were reported from 2011 to 2020 that evaluated the anticancer potential of the Grewia species, wherein five articles reported the anticancer effects of G. asiatica , one study focused on G. lasiocarpa , and all the studies employed the MTT assay as an index of cell proliferation. Among five studies on G. asiatica , two studies used a methanolic extract from the leaves, one study used the methanolic extract from fruit residues, one study used an aqueous methanol extract from fruit, one study presented the comparison between the aqueous extract from both fruits and leaves, and the last study used stem bark of G. lasiocarpa . In all studies, samples were prepared by initially drying the fruit/leaves/stems under shade and then were extracted with the previously mentioned solvents. Five studies were included in the meta-analysis of the anticancer activities of various Grewia species, as summarized in b, except for one study reported by Dattani et al. (2011) which was excluded as the results were presented in different units. The meta-analysis revealed that the Grewia species showed anticancer activities (MRAW = 65.94, 95% CI = 57.89–73.99, p < 0.01, I 2 = 93%) overall. However, a detailed sub-meta-analysis for specific cancer cell lines showed that G. asiatica exerted profound effects against the proliferation of HepG2 cells (MRAW = 66.77, 95% CI = 49.48–84.05, p < 0.01, I 2 = 82%), NCI-H 522 (MRAW = 67.09, 95% CI = 46.07–8.11, p < 0.30, I 2 = 70%), MCF-7 (MRAW = 61.94, 95% CI = 41.62–82.26, p < 0.01, I 2 = 97%), and HeLa (MRAW = 87.72, 95% CI = −52.52–227.97, p < 0.06, I 2 = 72%). In contrast, there was only one study that discussed the effects of the G. asiatica fruit against K562 and HL-60 (human leukemia) cells; therefore, a meta-analysis was not possible for that study. Marya et al. observed significant cytotoxic effects of the aqueous extract from the G. asiatica fruit against HEp-2 (larynx cancer), NCI-H522 (lung cancer), and MCF-7 (breast cancer) with IC 50 of 50.31 µg/mL, 59.03 μg/mL and 58.65 μg/mL, respectively. However, notable activity of the aqueous extract from the G. asiatica leaves was observed in the aforementioned study against HEp-2 and MCF-7 cancer cell lines with IC 50 extracts of 61.23 µg/mL, 50.37 µg/mL, respectively. Dattani et al. recorded a similar response of ethanolic extracts from the G. 
asiatica fruit against NCI-H522 and MCF-7 cells while the extracts appeared to be ineffective against a cervical cancer cell line (HeLa) and HEp-2. Moreover, the intraperitoneal administration of methanolic extracts from the G. asiatica fruit inhibited the growth of Ehrlich’s ascites carcinoma (EAC) cells resulting in a significant increase in the life span of tumor-bearing animals and exerted cytotoxic activity toward four human cancer cells i.e., HL-60, K-562, MCF-7, and HeLa with an IC 50 of 53.7, 54.9, 199.5, and 177.8 µg/mL, respectively . With regard to the pomace extract from G. asiatica , this was shown to elicit a significant cytotoxic activity against MCF-7 with an IC 50 of 68.91 µg/mL, and a less remarkable activity towards bone sarcoma cells (MG-63), HeLa, and hepatocellular carcinoma cells (HepG2) . A recent study illustrated that the aqueous methanol extract from G. asiatica to be more effective against breast cancer, lung cancer, and laryngeal cancer cell lines with IC 50 of 34.87 µg/mL, 73.01 µg/mL, and 80.41 µg/mL, respectively suggesting antitumor claims for the G. asiatica . However, the stem, bark, leaves, and pulp extracts from G. asiatica , when analyzed for cytotoxic potential by using a brine shrimp lethality assay and a hemagglutination assay, failed to show a significant cytotoxic response . The last study reported anticancer effects of the pure compound lupeol i.e., isolated from the stem bark of G. lasiocarpa against HEK293 (human embryonic kidney), HeLa, and MCF-7 cells. 3.2.3. Anti-Inflammatory Activity The therapeutic role of medicinal plants alone or as adjuncts to conventional treatments, is firmly recognized. This notion, along with the relatively low cost of medicinal plants, has been a reason to promote their use in poor countries where people have restricted access to expensive drugs . Inflammation is a fundamental and highly orchestrated physiological defensive process against noxious factors such as infections, exposure to toxicants, allergens, and other stimuli. Inflammation is often associated with pain that is quite often mediated with nonsteroidal drugs (NSAIDS) such as corticosteroids which possess remarkable anti-inflammatory activity and analgesics such as opioids and anticonvulsants . However, the prolonged use of these drugs is discouraged due to their adverse effects such as severe gastric lesions, digestive system disorders, nausea, urinary retention, and dependence on opioids. Six studies were included in this category from 2012 to 2020; five of them focused on G. asiatica whereas five articles focused on fruit parts and one study focused on the stem bark. The last study provided a comparison between the n -hexane extracts from G. asiatica and G. optiva for their protective effects against hypotonicity-induced lysis i.e., membrane stabilization. Lyophilization was the most commonly employed technique for sample preparation. The analgesic activity was evaluated using acetic acid-induced writhing and hot plate methods. Antipyretic activity was evaluated using the Brewerís yeast-induced pyrexia method, in vivo anti-inflammatory activity was recorded using carrageenan-induced paw edema, and in vitro anti-inflammatory activity was examined using the human RBC membrane stabilization method. Four studies were included in the meta-analysis of the anti-inflammatory activities of two Grewia species i.e., asiatica and optiva , as summarized in c, except one study reported by Bajpai et al. , where the results were not presented in any unit. 
The meta-analysis revealed that the Grewia species showed notable antinociceptive and anti-inflammatory activity (MRAW = 43.98, 95% CI = 21.98–65.97, p = 0.0, I 2 = 100%) overall. However, a detailed sub meta-analysis suggested notable antinociceptive activities against acetic acid-induced writhing (MRAW = 72.18, 95% CI = 13.09–131.27, p = 0.0, I 2 = 100%), anti-inflammatory activity against carrageenan-induced paw edema (MRAW = 45.18, 95% CI = 24.61–65.75, p = 0.0, I 2 = 100%), and protection of the membrane against heat-induced hemolysis (MRAW = 21.6, 95% CI = −41.35–84.55, p = 0.0, I 2 = 100%). Das et al. evaluated the anti-pain activity of aqueous extracts from G. asiatica fruits using the acetic acid-induced writhing ( n = 35, trial duration 30 min), tail immersion ( n = 35 trail duration was 10, 30, 60 min), and hot plate methods ( n = 35, trial duration 10 min) in rats. Aqueous extracts of the G. asiatica fruit (200–300 mg/kg body weight) were found to attenuate the pain induced by acetic acid in the writhing test, tail immersion, and hot plate tests. Paviaya et al. reported the analgesic efficacy of aqueous and methanolic extracts of G. asiatica bark in the hot plate test ( n = 30, trial duration was 0, 30, 90, 190 min) and the writhing response test ( n = 30, trial duration was 30 min). Similar studies also demonstrated that the methanolic and aqueous fruit extracts of G. asiatica at doses between 300 and 500 mg/kg counteracted the fever induced by lipopolysaccharide ( n = 25, trail duration was 30, 60, 90 min) and brewer’s yeast ( n = 25, trail duration was 1, 2, 3, 18 h) in rats, respectively . The most recent study by Qamar et al. found that 100% methanol and 50% aqueous methanol extracts of G. asiatica fruits protected the animals under experimentation from the painful stimulation of formalin ( n = 40, trial duration was 0–25 min) in a dose-dependent manner with the maximum effect being 62.9% and 62.6%, respectively at 400 mg/kg/body weight. Similarly, methanol and aqueous methanol extracts of the G. asiatica fruit subjected to a glutamate-induced ( n = 40, trial duration was 0–25 min) nociceptive response assessment in a mice model showed a significant anti-nociceptive effect from G. asiatica in comparison to the control and standard drug . The anti-inflammatory potential of G. asiatica has also been extensively investigated. The data demonstrating the efficacy of the various anatomical fractions of G. asiatica such as bark as anti-inflammatory agents were significant when tested against carrageenan-induced paw oedema ( n = 30 trail duration was 3 h) in rats. The authors confirmed that bark methanol and aqueous extracts were significant factors that attenuated paw edema at 400 mg/kg as 59.14% and 53.04%, respectively, while the response was quite comparable to that of indomethacin (64.02% reduction at 10 mg/kg) . Methanol extracts of G. asiatica fruits were also screened for their possible anti-inflammatory activity on carrageenan-induced paw edema in rats at an oral dose level of 250 and 500 mg/kg, orally. The extract showed significant anti-inflammatory activity at both doses . The methanol and aqueous extracts of G. asiatica fruits exerted anti-inflammatory activity against carrageenan-induced paw edema ( n = 25, trail duration was 1–3 h) in a dose-dependent manner at 36.1% and 32.4% at 500 mg/kg, respectively in comparison to the standard indomethacin, which exerted 36.4% inhibition at 10 mg/kg . Feeding 100% methanolic extracts of G. 
asiatica fruits to mice at the rate of 400 mg/kg b.w., inhibited formaldehyde ( n = 40, trail duration was 0–25 h) and carrageenan-induced paw oedema ( n = 40, trail duration was 1–3 h) by 74% and 71%, respectively, while the inhibition rate was 88% with indomethacin within 3 h of extract/standard drug feeding at 100 mg/kg. Further, a 50% methanolic extract also indicated increased efficacy against Prostaglandin E 2 (PGE2)-induced paw edema (68.7% inhibition at 400 mg/kg (b.w.) in 120 min of extract administration) in comparison with the control while indomethacin presented a relatively higher rate of inhibition to PGE2-induced paw edema i.e., 79% at 100 mg/kg . The last study reported that the traditional use of G. asiatica n- hexane extracts as anti-inflammatory ingredients was justified on account of the extract’s ability to significantly stabilize human red blood cells in comparison with diclofenac potassium. 3.2.4. Antidiabetic Activity Type 2 diabetes has emerged as an important health problem within the 21st century . Ever increasing infiltration trends of diabetes is one of the major health-threatening issues in both developed and developing societies and individuals . Hitherto, one human, five animal, and three in vitro studies have been conducted to investigate the antidiabetic potential of G. asiatica from 2011 to 2016 and no other Grewia species have been explored under the mentioned category so far. Among nine reported studies, 3, 3, 1, 1 focused on leaves, fruits, bark, and pomace, respectively, and one study presented a comparison of the antidiabetic activity of the fruit, stem, and bark ethanolic extracts of G. asiatica . Interestingly, the oral supplementation of an ethanol extract of the G. asiatica bark in alloxan-induced diabetic rats ( n = 20, trial duration was 0–120 min) significantly attenuated the blood glucose levels and increased the survival rate of diabetic rats when compared with metformin-treated rats . Likewise, ethanolic extracts of G. asiatica significantly lowered the blood glucose level in alloxan-induced diabetic rats ( n = 36, trail duration was 0–7 h), and appeared to be more effective than glibenclamide used as a reference antidiabetic drug . Another study on streptozotocin-induced diabetic rats ( n = 36, trail duration was 0–24 h) recorded that the oral administration of extracts of the G. asiatica leaves at the rate of ~500 mg/kg b.w. for 21 days efficiently shortened and reduced blood glucose spikes in rats previously exposed to overloads of glucose, and considerably increased the glucose tolerance in normal rats . Likewise, Khattab et al. recorded normalized glycemia in streptozotocin-induced rats fed with G. asiatica fruit extracts. The study ( n = 40, trail duration was 4 weeks) recorded reduced serum cytokine IL-1β and TNF-α levels, decreased pancreas malondialdehyde (MDA) levels, and normalized glycemia concomitantly with a higher accumulation of liver glycogen and increased liver and pancreas glutathione (GSH) and superoxide dismutase (SOD) enzyme activities. Moreover, the inhibitory properties of the aqueous extracts of the G. asiatica fruit against α-glucosidase and α-amylase activity with IC 50 of 8.93 and 0.41 mg/mL, respectively were also reported by Das et al. The inhibitory properties of the aqueous methanol extract of G. asiatica fruit residues with IC 50 of 45.70 mg/mL against α-Amylase were recorded and were more promising when compared with the extracts of B . vulgaris, A. comosus , A. lachoocha , and A. heterophyllus fruit . 
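The IC50 values cited in this and the surrounding subsections are usually obtained by fitting a sigmoidal (four-parameter logistic) curve to percent-inhibition data measured across a dilution series. The snippet below is a generic sketch of such a fit on mock data; the concentrations and responses are invented, and the Hill-type model shown is simply the most common choice, not necessarily the one used in the cited studies.

import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic (Hill) model for % inhibition vs. concentration."""
    return bottom + (top - bottom) / (1.0 + (ic50 / conc) ** hill)

# Mock dilution series (ug/mL) and observed % inhibition - illustrative only.
conc = np.array([6.25, 12.5, 25.0, 50.0, 100.0, 200.0])
inhibition = np.array([8.0, 18.0, 35.0, 52.0, 71.0, 84.0])

params, _ = curve_fit(four_pl, conc, inhibition, p0=[0.0, 100.0, 50.0, 1.0], maxfev=10000)
bottom, top, ic50, hill = params
print(f"estimated IC50 ~ {ic50:.1f} ug/mL (Hill slope {hill:.2f})")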
Notably, clinical trials revealed that G. asiatica fruit extracts have a moderate hypoglycemic effect on a non-diabetic human model. Furthermore, when tested in vitro with glucose, the fruit showed neutralizing effects on glucose-induced reactive oxygen species (ROS) suppression . 3.2.5. Radioprotective and Hepatoprotective Potential Exposure to ionizing radiations is unsafe for human health, even when used in therapeutics e.g., radiotherapy against cancer cells may cause severe side effects to irradiated patients. Eight studies were reported in this category and all studies explored the edible portion of G. asiatica . Seven of them performed the extraction with methanol and one study employed ethanol extraction. Methanol extract supplementation was reported to protect mice brain lipids against radiation-induced ( n = 120, trail duration was 1–30 days) oxidation and was shown to improve the GSH content by 14.3% . G. asiatica fruit extracts were also shown to significantly protect against the deleterious effects of whole-body irradiation in mice . Another study by Sisodia and Singh (2009) reported that G. asiatica fruit extracts prevented radiation-induced memory and learning deficits in addition to known histopathological, biochemical, and behavioral ameliorative effects. Numerous studies advocated a G. asiatica fruit-enriched diet to reduce lipid peroxidation rates and serum cholesterol, and to restore the normal levels of GSH, glutathione peroxide (GSH-Px), sugars, and proteins in irradiated mice models . A histopathological and biochemical investigation of the hepatic tissues of X-ray-irradiated G. asiatica extract-fed mice demonstrated hepatoprotective effects . Radioprotective effects were also noticed in histopathological specimens of mice testis where irradiation resulted in lower spermatogonia “A”, spermatogonia “B”, spermatocytes and spermatid count when compared with animals irradiated after supplementation with G. asiatica fruit extracts . 3.2.6. Antimicrobial Properties Plants serve as an important source of novel medicinal substances . Sufficient information is available to confirm the anti-infective role of bioactive compounds of natural origin. For centuries, the use of herbal drugs has been extensively recommended to modulate various opportunistic infections. Flavonoids isolated from ethnopharmacologically established plants are considered to be effective antimicrobial substances against a wide variety of microorganisms . Nine studies were reported in the mentioned category from 2011 to 2020, wherein two studies performed both antibacterial and antifungal activities, five studies reported only antibacterial activities, and lastly, two studies only evaluated the antifungal potential. Out of nine total reported articles, G. asiatica was the most commonly explored i.e., G. asiatica was the focus in six studies and G. optiva , G. lasiocarpa, and G. hirsuta were the focus in 1, 1, and 1 studies, separately. Five studies focused on leaves, two studies focused on fruit, and two studies used stem bark to evaluate the antibacterial and antifungal properties of the Grewia species. Researchers have shown that crude extracts of the Grewia spp. have valuable antibacterial activities predominately associated with their high flavonoid content. Beside the fruit fraction, the leaves and stem bark of G. asiatica have also been suggested to possess antimicrobial potential . 
Six studies were included in the meta-analysis of antimicrobial potential including three Grewia species i.e., asiatica , optiva , and hirsuta as summarized in d, except three studies reported by Akwu et al. (2020) , Zia et al. (2011) , Dawar et al. (2020) wherein standard deviation was not mentioned. The meta-analysis revealed that the Grewia species showed notable antibacterial and anti-fungal activity (MRAW = 17.15, 95% CI = 11.59–22.70, p = 0.0, I 2 = 100%) overall. However, the detailed sub meta-analysis suggested showed notable antibacterial (MRAW = 10.05, 95% CI = 7.72–12.38, p = 0.0, I 2 = 100%) and anti-fungal (MRAW = 32.51, 95% CI = 26.85–38.17, p = 0.0, I 2 = 100%) activities. Flavonoids and flavonoid-rich fractions isolated from the peel and pulp of G. asiatica caused significant inhibition against Gram-positive and Gram-negative bacterial strains. Staphylococcus aureus was reported to be the most susceptible and Bacillus subtilis was reported to be the least susceptible among the Gram-positive bacterial strains while Salmonella typhi were the most susceptible and Escherichia coli ranked among least susceptible among the Gram-negative bacterial strains . The antibacterial activity of the methanolic extract of the G. asiatica leaves has been reported against Staphylococcus aureus and Salmonella typhi while the aqueous extract of G. asiatica leaves was only found to be effective against S. aureus . G. asiatica fruit extracts can inhibit Gram-negative bacteria through their bioactive compounds such as flavonoids, alkaloids, and saponins without necessarily penetrating into the microbial cell . The potent antibacterial activity of G. asiatica leaf extracts has also been shown against eight different bacterial strains, i.e., Proteus mirabilis , Citrobacter sp., Pseudomonas aeruginosa , Escherichia coli , Salmonella typhi , Micrococcus luteus , Staphylococcus aureus , and Bacillus subtilis . Another study investigating the efficacy of the bark and fruit extracts of G. asiatica against four Gram-positive and six Gram-negative bacterial strains found that the extracts were more active toward S. aureus , E. coli , and Proteus vulgaris , and overall, were more active on Gram-positive strains as compared to Gram-negative bacteria . Another study found the aqueous extract of G. optiva leaves to exert moderate inhibition against three different bacterial strains named S. aureus , E. coli , Salmonella typhi , and Streptococcus pneumoniae . A seventy per cent methanol extract of G. hirsuta showed antibacterial activity against S. aureus and E. coli equivalent to the standard drug ciprofloxacin . Ethanol extracts of G. asiatica leaves were reported to have good antifungal activity against nine fungal strains, namely, Aspergillus effusus , A. parasiticus , A. niger , Saccharomyces cerevisiae , Candida albicans , Yersinia aldovae , Fusarium solani , Macrophomina phaseolina , and Trichophyton rubrum . Pathogenic fungi are responsible for huge crop production losses by perishing the roots system within plants. In vitro antifungal trials using paper disc and diffusion methods found that a 100% aqueous extract from the G. asiatica leaves induced significant inhibition against Rhizoctonia solani , Fusarium oxysporum , and Macrophomina phaseolina and consequently ameliorated the growth of bottle gourd and cowpea. Correspondingly, in vivo results disclosed that an addition of 1% of the powder of the G. 
asiatica leaves to organic matter considerably reduced the colonization of Macrophomina phaseolina, Rhizoctonia solani , and Fusarium spp . An even greater inhibition against colonization was offered by a 100% G. asiatica leaf extract when directly drenched into the soil. Further, seed treatment of food crops with a 100% leaf extract was reported to increase the bottle gourd and cowpea growth and notably suppressed fungal attack . In an in vitro antiviral trial, a G. asiatica extract was sprayed on test plants at different concentrations (500, 1000, 1500, and 2000 μg/mL) against ULCV (Urdbean leaf crinkle virus). Plants sprayed with 1000 μg/mL of a G. asiatica extract exhibited a minimum % infection (34%) as compared to the control which showed 90% infection, while notable activity against ULCV at concentrations of 1500 and 2000 μg/mL was observed . The traditional use of the G. asiatica fruit and its decoctions as a remedy for digestive and urinary disorders is hence justifiably linked to the broad-spectrum antimicrobial activity of the fruit extracts against digestive and urinary tract pathogens such as Salmonella , E. coli , S. aureus , P. aeruginosa , and M. luteus. In a recent study by Goswami et al. (2018) , it was reported that a G. asiatica leaf acetone extract inhibited the activity of different pathogenic fungi including A. fumigatus and C. glabratai at a concentration of 35 mg/mL. 3.2.7. Antiemetic and Antimalarial Activities The antiemetic potential of an ethanol extract of the G. asiatica fruit was evaluated in a canine model at quite low doses while acute oral toxicity assays proved that extracts were safe for consumption at 200 mg/kg b.w., . The referred study documented that administration of fruit extracts at 120 mg/kg b.w., was capable of inducing antiemetic effects in dogs and standard antiemetic drugs such as largactil and maxolon were shown to be active. Similar assays were performed on male chicks and the researchers suggested a dose-dependent inhibition, i.e., a decrease in the number of retches such that 39% inhibition was observed at 50 mg/kg while ~60% inhibition was recorded with a 100 mg/kg supplementation of methanol fruit extracts of G. asiatica . The literature confirmed the anti-malarial potential (69% inhibition) of the G. asiatica leaves assayed for their possible anti-malarial activity using Enoyl-ACP reductase inhibitory assay . 3.2.8. Other Activities: Immunomodulatory, Anti-Depressant, Anti-Neurodegenerative, Drug Delivery Polymers The clinical data are still scarce on the immunomodulatory or immunoregulatory properties of G. asiatica. However, the presence of bioactive compounds bearing significant immune-mediating activities hints at future trends in immunological research related to the genus Grewia . As discussed earlier, the fruit extracts of G. asiatica carry a significant concentration of compounds such as quercetin, isovitexin, kaempferol, iso-liquiritigenin, and umbelliferone that have been extensively explored for their innate and adoptive immune response in inflammatory disorders A flavonoid-rich ethanol extract of the G. asiatica leaves was reported to exhibit immunomodulatory properties with satisfactory immunostimulation . The notable sedative–hypnotic potential of methanol leaf extracts of G. asiatica in mice models was investigated and no toxicity was observed at a 300 mg/kg dose level . Further studies explicated that G . 
asiatica methanol extracts improved scopolamine-induced learning and memory deficits in rats through the employment of behavior assessment models by reinstating the cytoarchitecture of effected neuronal cells, elevating neurotransmitter acetylcholine, and settling oxidative stress . G. asiatica leaf fractions derived using petroleum ether and chloroform solvents showed considerable effectiveness against neurodegenerative ailments by inhibiting bovine brain acetylcholinesterase (IC 50 = 55.88 µg/mL) and human blood butyrylcholinesterase (IC 50 = 26.14 µg/mL) enzymes, respectively . G. asiatica extracts have the ability to generate colloidal dispersions and viscous gel in water. The mucilage of G. asiatica was therefore tested as natural polymeric ingredients for gel formulation in drug design and suggested identical behavior to that of marketed formulation without negatively affecting drug release .
Five studies reported qualitative and quantitative analyses of the proximate composition of various Grewia species including G. asiatica , G. tenax , G. flavescence, G. villosa , G. tilifolia, and G. nervosa . The total content of carbohydrates, fibers, lipids, proteins, and ash was reported in the fruits, seeds, and leaves. The data illustrate that carbohydrate contents were higher in the fruits, ranging between 21 and 84% , followed by seeds, 39–66% , and leaves, 28–40% . The fat content in seeds was reported as 11.1% and was almost six times higher than that recoded in fruits (0.10–1.70%) and three times higher than leaves (2.60–3.86%) . On an average basis, leaves (12.9–18.9%) and seeds (7.50–17.4%) were reported to be a rich source of protein in contrast to fruits (1.57–8.7%). A similar trend was observed for fiber wherein the leaves exhibited more fiber content, 28.3–38.3%, followed by seeds at 14.8–26.1%, and fruits at 5.53–25.5% on average. Ash content (6–11%) in leaves was on average almost three or two times higher when compared to seeds (3–5.08%) or fruits (1.1–5.2%). represents in detail the proximate composition of the different Grewia species. Fruits and vegetables contain a huge array of secondary metabolites and in fact, these metabolites form the basis for numerous commercial pharmaceutical drugs, as well as herbal remedies derived from medicinal plants . Today, the pharmacological and disease-preventing role of various classes of phytochemicals is firmly established. These chemical constituents predominantly act as antioxidants, anticancer agents, detoxifying agents, and immunity-potentiating and neuropharmacological agents . Grewia has been shown to contain a wide variety of phytochemicals and bioactive compounds. Among the seven Grewia species considered for the phytochemistry study, G. asiatica was explored in eleven studies, G. optiva in three articles, and G. lasiocarpa, G. biloba, G. microcos, G. tiliaefolia , and G. hirsuta in each study. The information on phytochemical identification/quantification was reported in 19 articles, and three of them performed the quantification analysis . Regarding the plant parts analyzed in each study, the fruits of G. asiatica were the most explored, with five articles studying fruits alone . Two articles focused on G. asiatica leaves , two explored G. asiatica flowers , one studied Grewia optiva leaves , one studied G. asiatica stems and each studied G. microcos and G. lasiocarpa stems , G. tiliaefolia bark , G. hirsute leaves , and G. optiva roots and stems to identify the phytochemical constituents. We found 113 secondary metabolites reported from G. asiatica , G. optiva , G. tiliaefolia, G. biloba , G. microcos , G. hirsuta , and G. lasiocarpa allocated to 13 categories wherein 102 compounds were reported from G. asiatica , 19 were identified from G. optiva , three were identified from G. tiliaefolia , six were identified from G. biloba, seven were identified from G. microcos , one was identified from G. hirsuta , and one was identified from G. lasiocarpa. The same compounds identified in different studies were considered as a single compound with each presented with a respective reference. Flavonoids represented 41.3% of the reported bioactive compounds wherein the most dominant subgroup was anthocyanins (13.04%) followed by flavones (6.95%), flavanones (3.47%), isoflavonoids (3.47%), and flavanols (3.47%). 
Phenolic acids represented 6.95% of the reported compounds followed by triterpenes (6.95%), carboxylic acid (3.47%), phytosterols (2.60%), dihydroflavonols (1.73%), hydroxycinnamic acids (1.73%), sesquiterpenoids (1.73%), fatty acids (1.73%), 7-hydroxycoumarin (0.86%), fatty alcohol (0.86%), phenols (0.86%), xanthones (0.86%) and hydroxyquinols (0.86%). Of the 113 reported secondary metabolites, information on concentration was available for only 62 (54.86%) of them, grouped in 3 categories including flavonoids (anthocyanins, isoflavonoids, flavone, flavanones, flavanols, flavonols, dihydroflavonols), hydroxycinnamic acid, and 7-hydroxycoumarins. presents the compounds according to the parts of the plant. Eight categories of secondary metabolites were studied in fruits, four in stem bark, three in flowers, three were reported in leaves, and one in pomace. Out of 19 studies, only 3 performed the quantitative analysis whereas 16 articles without quantitative information of bioactive metabolites were reported .
3.2.1. Antioxidant Activity Antioxidant-based drug formulations are used for the prevention and adjunct treatment of complex diseases such as Alzheimer's disease, stroke, cancer, diabetes, and atherosclerosis, whose etiology is partly dependent on persistent oxidative damage by free radicals. Grewia has been identified as a candidate for the development of nutraceutical products by virtue of an array of relevant bioactive compounds; further investigations at the molecular level, however, are still needed to clarify the mechanisms of action of these active ingredients. Eleven studies investigated the antioxidant potential of the Grewia species; eight of them studied G. asiatica, one article focused on G. optiva, one evaluated G. lasiocarpa, and one appraised both G. flava and G. bicolor. The scavenging and reducing potential of the various parts of the Grewia species was reported to be dose dependent. The 2,2-diphenyl-1-picrylhydrazyl (DPPH) assay was the most commonly employed antioxidant assay, used in nine studies, alongside other methods: ABTS (2,2′-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid)) in three studies, ferric reducing antioxidant power (FRAP) in three, nitric oxide (NO) in two, and hydrogen peroxide (H2O2) in one study. The principle of the DPPH assay involves measuring the change in DPPH color from violet to pale yellow, resulting from the presence of radical-scavenging compounds. Six studies indicated that the Grewia species showed notable antioxidant potential against stable free radicals, of which five studies concerned G. asiatica and one concerned G. optiva. Among the five studies on G. asiatica, four studied the edible portion and one studied the leaves. Two studies adopted aqueous methanol extraction, two used pure methanol, and one used benzene for extraction. The reported work on G. optiva focused on leaves and stems, using methanol or water as extraction solvents. Methanol and aqueous methanol extracts of the G. asiatica fruits showed substantial scavenging activity, between 60 and 85%. Another study, reported by Gupta et al., recorded an IC50 of 16.19 µg/mL for the benzene extract of the G. asiatica leaves against free radicals in the DPPH assay, corresponding to almost 4.8-fold greater potency than the ascorbic acid standard (IC50 = 78.17 µg/mL). Three studies used the FRAP assay to evaluate the reducing potential of the Grewia species, of which two studied G. asiatica fruits and one studied the stem of G. lasiocarpa. The FRAP assay estimates the electron-donating capacity of a compound based on the reduction of ferric ion (Fe3+, as ferric tripyridyl triazine: Fe3+-TPTZ) to ferrous ion (Fe2+, as ferrous tripyridyl triazine: Fe2+-TPTZ). The 50% (aqueous) methanolic extract of the G. asiatica fruit evinced a dose-dependent reducing ability of 43 mg gallic acid equivalents per gram (GAE/g), approximately 10 times higher than that of the 100% methanolic extract of the same fruit (4.14 mg GAE/g). Three studies used the ABTS assay to measure the antioxidant activity of the Grewia species. Similar to the DPPH assay, the ABTS assay determines the antioxidant activity of hydrolysates that scavenge ABTS radicals, and the Grewia species showed dose-dependent ABTS scavenging. The meta-analyses for the antioxidant (a), anticancer (b), anti-inflammatory (c), and antimicrobial (d) activities are shown in the corresponding figure.
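The pooled statistics reported below (MRAW with a 95% confidence interval and an I² heterogeneity value) are standard random-effects summaries of raw study means. The minimal sketch below shows how such values can be computed with a DerSimonian–Laird estimator; the study means, standard deviations, and sample sizes are illustrative placeholders rather than the data extracted for this review, and the original analysis may have used different software or weighting.

```python
import numpy as np

# Illustrative inputs: per-study mean activity (%), SD, and sample size.
# Placeholder numbers only, NOT the values extracted for the review.
means = np.array([62.0, 71.5, 48.0, 80.2])
sds   = np.array([5.0, 8.0, 6.5, 4.0])
ns    = np.array([6, 3, 9, 3])

# Within-study variance of each raw mean and fixed-effect (inverse-variance) weights.
var_within = sds**2 / ns
w_fixed = 1.0 / var_within
mean_fixed = np.sum(w_fixed * means) / np.sum(w_fixed)

# DerSimonian-Laird estimate of the between-study variance tau^2.
k = len(means)
Q = np.sum(w_fixed * (means - mean_fixed) ** 2)
C = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (Q - (k - 1)) / C)

# Random-effects pooled raw mean (MRAW), its 95% CI, and the I^2 heterogeneity statistic.
w_re = 1.0 / (var_within + tau2)
mraw = np.sum(w_re * means) / np.sum(w_re)
se_mraw = np.sqrt(1.0 / np.sum(w_re))
ci_low, ci_high = mraw - 1.96 * se_mraw, mraw + 1.96 * se_mraw
i2 = max(0.0, (Q - (k - 1)) / Q) * 100.0

print(f"MRAW = {mraw:.2f}, 95% CI = {ci_low:.2f}-{ci_high:.2f}, I^2 = {i2:.0f}%")
```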
Ten studies were included in the meta-analysis of the antioxidant activities of the various Grewia species, as summarized in panel a. The meta-analysis revealed that the Grewia species showed notable antioxidant activity overall (MRAW = 59.71, 95% CI = 36.51–82.90, p = 0.0, I² = 100%). A detailed sub-meta-analysis of four and three studies, respectively, revealed significant antioxidant properties in the DPPH (MRAW = 64.34, 95% CI = 12.28–116.40, p = 0.0, I² = 100%) and ABTS assays (MRAW = 79.36, 95% CI = 18.43–140.28, p < 0.01, I² = 100%). In contrast, the NO, FRAP, and H2O2 assays were each represented by only one study; therefore, a heterogeneity analysis was not possible. 3.2.2. Anticancer Properties Despite overwhelming research efforts, cancer still represents the second leading cause of death and is trending towards becoming the leading cause in the elderly. Besides the tremendous development in anticancer therapies and drugs, the prevention of tumor generation by adopting a healthy lifestyle is generally considered an effective strategy to reduce cancer risk. It is well established that diets rich in fruit and vegetables are useful in cancer prevention by virtue of their content of a wide variety of phytochemicals. Their preventive activity goes beyond antioxidant capacity and includes effects on the expression of oncogenes, tumor suppressor genes, and transcription factors, and on the cell cycle and apoptosis. Six studies reported from 2011 to 2020 evaluated the anticancer potential of the Grewia species, wherein five articles reported the anticancer effects of G. asiatica and one study focused on G. lasiocarpa; all the studies employed the MTT assay as an index of cell proliferation. Among the five studies on G. asiatica, two used a methanolic extract from the leaves, one used a methanolic extract from fruit residues, one used an aqueous methanol extract from fruit, and one presented a comparison between aqueous extracts from both fruits and leaves, while the last study used the stem bark of G. lasiocarpa. In all studies, samples were prepared by initially drying the fruit/leaves/stems under shade and then extracting with the previously mentioned solvents. Five studies were included in the meta-analysis of the anticancer activities of the various Grewia species, as summarized in panel b; one study reported by Dattani et al. (2011) was excluded because its results were presented in different units. The meta-analysis revealed that the Grewia species showed anticancer activity overall (MRAW = 65.94, 95% CI = 57.89–73.99, p < 0.01, I² = 93%). A detailed sub-meta-analysis for specific cancer cell lines showed that G. asiatica exerted profound effects against the proliferation of HepG2 cells (MRAW = 66.77, 95% CI = 49.48–84.05, p < 0.01, I² = 82%), NCI-H522 (MRAW = 67.09, 95% CI = 46.07–8.11, p < 0.30, I² = 70%), MCF-7 (MRAW = 61.94, 95% CI = 41.62–82.26, p < 0.01, I² = 97%), and HeLa (MRAW = 87.72, 95% CI = −52.52–227.97, p < 0.06, I² = 72%). In contrast, only one study discussed the effects of the G. asiatica fruit against K562 and HL-60 (human leukemia) cells; therefore, a meta-analysis was not possible for that endpoint. Marya et al. observed significant cytotoxic effects of the aqueous extract from the G. asiatica fruit against HEp-2 (larynx cancer), NCI-H522 (lung cancer), and MCF-7 (breast cancer) cells, with IC50 values of 50.31, 59.03, and 58.65 µg/mL, respectively.
However, in the aforementioned study, notable activity of the aqueous extract from the G. asiatica leaves was also observed against the HEp-2 and MCF-7 cancer cell lines, with IC50 values of 61.23 and 50.37 µg/mL, respectively. Dattani et al. recorded a similar response of ethanolic extracts from the G. asiatica fruit against NCI-H522 and MCF-7 cells, while the extracts appeared to be ineffective against a cervical cancer cell line (HeLa) and HEp-2. Moreover, the intraperitoneal administration of methanolic extracts from the G. asiatica fruit inhibited the growth of Ehrlich's ascites carcinoma (EAC) cells, resulting in a significant increase in the life span of tumor-bearing animals, and exerted cytotoxic activity toward four human cancer cell lines, i.e., HL-60, K-562, MCF-7, and HeLa, with IC50 values of 53.7, 54.9, 199.5, and 177.8 µg/mL, respectively. With regard to the pomace extract from G. asiatica, this was shown to elicit significant cytotoxic activity against MCF-7, with an IC50 of 68.91 µg/mL, and less remarkable activity towards bone sarcoma cells (MG-63), HeLa, and hepatocellular carcinoma cells (HepG2). A recent study showed the aqueous methanol extract from G. asiatica to be more effective against breast cancer, lung cancer, and laryngeal cancer cell lines, with IC50 values of 34.87, 73.01, and 80.41 µg/mL, respectively, supporting the antitumor claims for G. asiatica. However, the stem, bark, leaf, and pulp extracts from G. asiatica, when analyzed for cytotoxic potential using a brine shrimp lethality assay and a hemagglutination assay, failed to show a significant cytotoxic response. The last study reported anticancer effects of lupeol, a pure compound isolated from the stem bark of G. lasiocarpa, against HEK293 (human embryonic kidney), HeLa, and MCF-7 cells. 3.2.3. Anti-Inflammatory Activity The therapeutic role of medicinal plants, alone or as adjuncts to conventional treatments, is firmly recognized. This notion, along with the relatively low cost of medicinal plants, has promoted their use in poor countries where people have restricted access to expensive drugs. Inflammation is a fundamental and highly orchestrated physiological defensive process against noxious factors such as infections, exposure to toxicants, allergens, and other stimuli. Inflammation is often accompanied by pain, which is commonly managed with nonsteroidal anti-inflammatory drugs (NSAIDs), corticosteroids with their remarkable anti-inflammatory activity, and analgesics such as opioids and anticonvulsants. However, the prolonged use of these drugs is discouraged due to adverse effects such as severe gastric lesions, digestive system disorders, nausea, urinary retention, and dependence on opioids. Six studies were included in this category from 2012 to 2020; five of them focused on G. asiatica, with five articles examining fruit parts and one the stem bark. The remaining study compared n-hexane extracts from G. asiatica and G. optiva for their protective effects against hypotonicity-induced lysis, i.e., membrane stabilization. Lyophilization was the most commonly employed technique for sample preparation. The analgesic activity was evaluated using acetic acid-induced writhing and hot plate methods.
Antipyretic activity was evaluated using the Brewer's yeast-induced pyrexia method, in vivo anti-inflammatory activity was recorded using carrageenan-induced paw edema, and in vitro anti-inflammatory activity was examined using the human RBC membrane stabilization method. Four studies on two Grewia species, i.e., G. asiatica and G. optiva, were included in the meta-analysis of the anti-inflammatory activities, as summarized in panel c; one study reported by Bajpai et al. was excluded because its results were not presented with units. The meta-analysis revealed that the Grewia species showed notable antinociceptive and anti-inflammatory activity overall (MRAW = 43.98, 95% CI = 21.98–65.97, p = 0.0, I² = 100%). A detailed sub-meta-analysis suggested notable antinociceptive activity against acetic acid-induced writhing (MRAW = 72.18, 95% CI = 13.09–131.27, p = 0.0, I² = 100%), anti-inflammatory activity against carrageenan-induced paw edema (MRAW = 45.18, 95% CI = 24.61–65.75, p = 0.0, I² = 100%), and protection of the membrane against heat-induced hemolysis (MRAW = 21.6, 95% CI = −41.35–84.55, p = 0.0, I² = 100%). Das et al. evaluated the antinociceptive activity of aqueous extracts from G. asiatica fruits using the acetic acid-induced writhing (n = 35, trial duration 30 min), tail immersion (n = 35, trial durations of 10, 30, and 60 min), and hot plate methods (n = 35, trial duration 10 min) in rats. Aqueous extracts of the G. asiatica fruit (200–300 mg/kg body weight) were found to attenuate the pain induced by acetic acid in the writhing, tail immersion, and hot plate tests. Paviaya et al. reported the analgesic efficacy of aqueous and methanolic extracts of G. asiatica bark in the hot plate test (n = 30, trial time points of 0, 30, 90, and 190 min) and the writhing response test (n = 30, trial duration 30 min). Similar studies also demonstrated that the methanolic and aqueous fruit extracts of G. asiatica at doses between 300 and 500 mg/kg counteracted the fever induced by lipopolysaccharide (n = 25, trial time points of 30, 60, and 90 min) and brewer's yeast (n = 25, trial time points of 1, 2, 3, and 18 h) in rats, respectively. The most recent study, by Qamar et al., found that 100% methanol and 50% aqueous methanol extracts of G. asiatica fruits protected the animals under experimentation from the painful stimulation of formalin (n = 40, trial duration 0–25 min) in a dose-dependent manner, with maximum effects of 62.9% and 62.6%, respectively, at 400 mg/kg body weight. Similarly, methanol and aqueous methanol extracts of the G. asiatica fruit subjected to a glutamate-induced (n = 40, trial duration 0–25 min) nociceptive response assessment in a mouse model showed a significant antinociceptive effect of G. asiatica in comparison with the control and the standard drug. The anti-inflammatory potential of G. asiatica has also been extensively investigated. Various anatomical fractions of G. asiatica, such as the bark, showed significant anti-inflammatory efficacy against carrageenan-induced paw edema (n = 30, trial duration 3 h) in rats: the authors confirmed that methanol and aqueous bark extracts at 400 mg/kg attenuated paw edema by 59.14% and 53.04%, respectively, a response comparable to that of indomethacin (64.02% reduction at 10 mg/kg). Methanol extracts of G.
asiatica fruits were also screened for possible anti-inflammatory activity against carrageenan-induced paw edema in rats at oral doses of 250 and 500 mg/kg, and the extract showed significant anti-inflammatory activity at both doses. The methanol and aqueous extracts of G. asiatica fruits exerted anti-inflammatory activity against carrageenan-induced paw edema (n = 25, trial duration 1–3 h) in a dose-dependent manner, with 36.1% and 32.4% inhibition at 500 mg/kg, respectively, in comparison with the standard indomethacin, which exerted 36.4% inhibition at 10 mg/kg. Feeding 100% methanolic extracts of G. asiatica fruits to mice at 400 mg/kg b.w. inhibited formaldehyde-induced (n = 40, trial duration 0–25 h) and carrageenan-induced (n = 40, trial duration 1–3 h) paw edema by 74% and 71%, respectively, while the inhibition rate with indomethacin was 88% within 3 h of extract/standard drug feeding at 100 mg/kg. Further, a 50% methanolic extract also showed increased efficacy against prostaglandin E2 (PGE2)-induced paw edema (68.7% inhibition at 400 mg/kg b.w. within 120 min of extract administration) in comparison with the control, while indomethacin presented a relatively higher rate of inhibition of PGE2-induced paw edema, i.e., 79% at 100 mg/kg. The last study reported that the traditional use of G. asiatica n-hexane extracts as anti-inflammatory ingredients was justified on account of the extract's ability to significantly stabilize human red blood cells in comparison with diclofenac potassium. 3.2.4. Antidiabetic Activity Type 2 diabetes has emerged as an important health problem of the 21st century. The ever-increasing prevalence of diabetes is one of the major health threats in both developed and developing societies. To date, one human, five animal, and three in vitro studies, published from 2011 to 2016, have investigated the antidiabetic potential of G. asiatica, and no other Grewia species has been explored in this category so far. Among the nine reported studies, three focused on leaves, three on fruits, one on bark, and one on pomace, while one presented a comparison of the antidiabetic activity of the fruit, stem, and bark ethanolic extracts of G. asiatica. Interestingly, oral supplementation with an ethanol extract of the G. asiatica bark in alloxan-induced diabetic rats (n = 20, trial duration 0–120 min) significantly attenuated blood glucose levels and increased the survival rate of diabetic rats when compared with metformin-treated rats. Likewise, ethanolic extracts of G. asiatica significantly lowered the blood glucose level in alloxan-induced diabetic rats (n = 36, trial duration 0–7 h) and appeared to be more effective than glibenclamide used as a reference antidiabetic drug. Another study on streptozotocin-induced diabetic rats (n = 36, trial duration 0–24 h) recorded that the oral administration of extracts of the G. asiatica leaves at ~500 mg/kg b.w. for 21 days efficiently blunted and shortened the blood glucose spikes in rats previously exposed to glucose overloads and considerably increased glucose tolerance in normal rats. Likewise, Khattab et al. recorded normalized glycemia in streptozotocin-induced rats fed with G. asiatica fruit extracts.
The study (n = 40, trial duration 4 weeks) recorded reduced serum IL-1β and TNF-α cytokine levels, decreased pancreatic malondialdehyde (MDA) levels, and normalized glycemia, concomitantly with a higher accumulation of liver glycogen and increased liver and pancreas glutathione (GSH) and superoxide dismutase (SOD) enzyme activities. Moreover, Das et al. reported inhibitory activity of the aqueous extracts of the G. asiatica fruit against α-glucosidase and α-amylase, with IC50 values of 8.93 and 0.41 mg/mL, respectively. The inhibitory activity of the aqueous methanol extract of G. asiatica fruit residues against α-amylase, with an IC50 of 45.70 mg/mL, was also recorded and was more promising than that of extracts of B. vulgaris, A. comosus, A. lachoocha, and A. heterophyllus fruit. Notably, clinical trials revealed that G. asiatica fruit extracts have a moderate hypoglycemic effect in a non-diabetic human model. Furthermore, when tested in vitro with glucose, the fruit showed a neutralizing effect on glucose-induced reactive oxygen species (ROS). 3.2.5. Radioprotective and Hepatoprotective Potential Exposure to ionizing radiation is harmful to human health, and even its therapeutic use, e.g., radiotherapy against cancer cells, may cause severe side effects in irradiated patients. Eight studies were reported in this category, and all of them explored the edible portion of G. asiatica. Seven performed the extraction with methanol and one study employed ethanol extraction. Methanol extract supplementation was reported to protect mouse brain lipids against radiation-induced (n = 120, trial duration 1–30 days) oxidation and was shown to improve the GSH content by 14.3%. G. asiatica fruit extracts were also shown to significantly protect against the deleterious effects of whole-body irradiation in mice. Another study, by Sisodia and Singh (2009), reported that G. asiatica fruit extracts prevented radiation-induced memory and learning deficits in addition to the known histopathological, biochemical, and behavioral ameliorative effects. Numerous studies advocated a G. asiatica fruit-enriched diet to reduce lipid peroxidation rates and serum cholesterol and to restore the normal levels of GSH, glutathione peroxidase (GSH-Px), sugars, and proteins in irradiated mouse models. A histopathological and biochemical investigation of the hepatic tissues of X-ray-irradiated, G. asiatica extract-fed mice demonstrated hepatoprotective effects. Radioprotective effects were also noticed in histopathological specimens of mouse testis, where irradiation alone resulted in lower spermatogonia "A", spermatogonia "B", spermatocyte, and spermatid counts when compared with animals irradiated after supplementation with G. asiatica fruit extracts. 3.2.6. Antimicrobial Properties Plants serve as an important source of novel medicinal substances. Sufficient information is available to confirm the anti-infective role of bioactive compounds of natural origin. For centuries, the use of herbal drugs has been extensively recommended to manage various opportunistic infections. Flavonoids isolated from ethnopharmacologically established plants are considered to be effective antimicrobial substances against a wide variety of microorganisms.
Nine studies were reported in this category from 2011 to 2020: two assessed both antibacterial and antifungal activities, five reported only antibacterial activities, and two evaluated only the antifungal potential. Of the nine articles, G. asiatica was the most commonly explored, being the focus of six studies, while G. optiva, G. lasiocarpa, and G. hirsuta were each the focus of one study. Five studies focused on leaves, two on fruit, and two used stem bark to evaluate the antibacterial and antifungal properties of the Grewia species. Researchers have shown that crude extracts of Grewia spp. have valuable antibacterial activities, predominantly associated with their high flavonoid content. Beside the fruit fraction, the leaves and stem bark of G. asiatica have also been suggested to possess antimicrobial potential. Six studies covering three Grewia species, i.e., G. asiatica, G. optiva, and G. hirsuta, were included in the meta-analysis of antimicrobial potential, as summarized in panel d; three studies reported by Akwu et al. (2020), Zia et al. (2011), and Dawar et al. (2020) were excluded because no standard deviations were provided. The meta-analysis revealed that the Grewia species showed notable antibacterial and antifungal activity overall (MRAW = 17.15, 95% CI = 11.59–22.70, p = 0.0, I² = 100%). A detailed sub-meta-analysis suggested notable antibacterial (MRAW = 10.05, 95% CI = 7.72–12.38, p = 0.0, I² = 100%) and antifungal (MRAW = 32.51, 95% CI = 26.85–38.17, p = 0.0, I² = 100%) activities. Flavonoids and flavonoid-rich fractions isolated from the peel and pulp of G. asiatica caused significant inhibition of Gram-positive and Gram-negative bacterial strains. Staphylococcus aureus was reported to be the most susceptible and Bacillus subtilis the least susceptible among the Gram-positive strains, while Salmonella typhi was the most susceptible and Escherichia coli among the least susceptible of the Gram-negative strains. The antibacterial activity of the methanolic extract of the G. asiatica leaves has been reported against Staphylococcus aureus and Salmonella typhi, while the aqueous extract of G. asiatica leaves was only found to be effective against S. aureus. G. asiatica fruit extracts can inhibit Gram-negative bacteria through their bioactive compounds, such as flavonoids, alkaloids, and saponins, without necessarily penetrating the microbial cell. The potent antibacterial activity of G. asiatica leaf extracts has also been shown against eight different bacterial strains, i.e., Proteus mirabilis, Citrobacter sp., Pseudomonas aeruginosa, Escherichia coli, Salmonella typhi, Micrococcus luteus, Staphylococcus aureus, and Bacillus subtilis. Another study, investigating the efficacy of the bark and fruit extracts of G. asiatica against four Gram-positive and six Gram-negative bacterial strains, found that the extracts were more active toward S. aureus, E. coli, and Proteus vulgaris and, overall, were more active on Gram-positive than on Gram-negative bacteria. Another study found the aqueous extract of G. optiva leaves to exert moderate inhibition against S. aureus, E. coli, Salmonella typhi, and Streptococcus pneumoniae. A seventy per cent methanol extract of G. hirsuta showed antibacterial activity against S. aureus and E.
coli, equivalent to the standard drug ciprofloxacin. Ethanol extracts of G. asiatica leaves were reported to have good antifungal activity against nine fungal strains, namely, Aspergillus effusus, A. parasiticus, A. niger, Saccharomyces cerevisiae, Candida albicans, Yersinia aldovae, Fusarium solani, Macrophomina phaseolina, and Trichophyton rubrum. Pathogenic fungi are responsible for huge crop production losses by destroying the root systems of plants. In vitro antifungal trials using paper disc and diffusion methods found that a 100% aqueous extract from the G. asiatica leaves induced significant inhibition of Rhizoctonia solani, Fusarium oxysporum, and Macrophomina phaseolina and consequently improved the growth of bottle gourd and cowpea. Correspondingly, in vivo results disclosed that the addition of 1% of the powder of the G. asiatica leaves to organic matter considerably reduced the colonization of Macrophomina phaseolina, Rhizoctonia solani, and Fusarium spp., and an even greater inhibition of colonization was achieved when a 100% G. asiatica leaf extract was drenched directly into the soil. Further, seed treatment of food crops with a 100% leaf extract was reported to increase bottle gourd and cowpea growth and notably suppressed fungal attack. In an in vitro antiviral trial, a G. asiatica extract was sprayed on test plants at different concentrations (500, 1000, 1500, and 2000 μg/mL) against ULCV (Urdbean leaf crinkle virus). Plants sprayed with 1000 μg/mL of the G. asiatica extract exhibited the minimum infection rate (34%) compared with the control, which showed 90% infection, while notable activity against ULCV was also observed at concentrations of 1500 and 2000 μg/mL. The traditional use of the G. asiatica fruit and its decoctions as a remedy for digestive and urinary disorders is hence justifiably linked to the broad-spectrum antimicrobial activity of the fruit extracts against digestive and urinary tract pathogens such as Salmonella, E. coli, S. aureus, P. aeruginosa, and M. luteus. In a recent study by Goswami et al. (2018), a G. asiatica leaf acetone extract was reported to inhibit the activity of different pathogenic fungi, including A. fumigatus and C. glabrata, at a concentration of 35 mg/mL. 3.2.7. Antiemetic and Antimalarial Activities The antiemetic potential of an ethanol extract of the G. asiatica fruit was evaluated in a canine model at quite low doses, while acute oral toxicity assays indicated that the extracts were safe for consumption at 200 mg/kg b.w. The referenced study documented that administration of the fruit extract at 120 mg/kg b.w. was capable of inducing antiemetic effects in dogs, and the standard antiemetic drugs Largactil and Maxolon were likewise shown to be active. Similar assays performed on male chicks suggested a dose-dependent inhibition, i.e., a decrease in the number of retches, such that 39% inhibition was observed at 50 mg/kg while ~60% inhibition was recorded with 100 mg/kg supplementation of methanol fruit extracts of G. asiatica. The literature also reports antimalarial potential (69% inhibition) for the G. asiatica leaves, assayed using an enoyl-ACP reductase inhibition assay. 3.2.8. Other Activities: Immunomodulatory, Anti-Depressant, Anti-Neurodegenerative, Drug Delivery Polymers The clinical data are still scarce on the immunomodulatory or immunoregulatory properties of G. asiatica.
However, the presence of bioactive compounds bearing significant immune-mediating activities hints at future trends in immunological research related to the genus Grewia. As discussed earlier, the fruit extracts of G. asiatica carry significant concentrations of compounds such as quercetin, isovitexin, kaempferol, isoliquiritigenin, and umbelliferone, which have been extensively explored for their effects on innate and adaptive immune responses in inflammatory disorders. A flavonoid-rich ethanol extract of the G. asiatica leaves was reported to exhibit immunomodulatory properties with satisfactory immunostimulation. The notable sedative–hypnotic potential of methanol leaf extracts of G. asiatica was investigated in mouse models, and no toxicity was observed at a 300 mg/kg dose level. Further studies showed that G. asiatica methanol extracts improved scopolamine-induced learning and memory deficits in rats, assessed with behavioral models, by restoring the cytoarchitecture of affected neuronal cells, elevating the neurotransmitter acetylcholine, and attenuating oxidative stress. G. asiatica leaf fractions obtained with petroleum ether and chloroform showed considerable effectiveness against neurodegenerative ailments by inhibiting bovine brain acetylcholinesterase (IC50 = 55.88 µg/mL) and human blood butyrylcholinesterase (IC50 = 26.14 µg/mL), respectively. G. asiatica extracts also have the ability to form colloidal dispersions and viscous gels in water; the mucilage of G. asiatica was therefore tested as a natural polymeric ingredient for gel formulation in drug design and showed behavior comparable to that of a marketed formulation without negatively affecting drug release.
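Most of the in vitro endpoints quoted in Sections 3.2.1–3.2.6 are either a percent scavenging/inhibition relative to an untreated control or an IC50 read from a dose–response series. The minimal sketch below illustrates both calculations; the absorbance and concentration values are placeholders for illustration only, not data from the reviewed studies, and published IC50 values are typically obtained from fitted dose–response curves rather than the crude interpolation shown here.

```python
import numpy as np

def percent_inhibition(a_control: float, a_sample: float) -> float:
    """Percent scavenging/inhibition relative to an untreated control
    (the general form of DPPH, ABTS, and enzyme-inhibition endpoints)."""
    return 100.0 * (a_control - a_sample) / a_control

def ic50_by_interpolation(conc, inhibition) -> float:
    """Crude IC50 estimate: linear interpolation between the two doses
    bracketing 50% inhibition (assumes inhibition increases with dose)."""
    conc, inhibition = np.asarray(conc, float), np.asarray(inhibition, float)
    return float(np.interp(50.0, inhibition, conc))

# Illustrative placeholder values only.
print(percent_inhibition(a_control=0.92, a_sample=0.31))      # ~66.3% scavenging
print(ic50_by_interpolation([10, 25, 50, 100, 200],           # concentrations, ug/mL
                            [12, 28, 47, 63, 81]))            # % inhibition -> IC50 ~ 59 ug/mL
```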
From the results of the selected papers on the nutritional and phytochemical composition and the health-promoting potential of the Grewia species, our review identified primary metabolites (carbohydrate, protein and amino acids, fiber, fat, and fatty acids), minerals (calcium, sodium, iron, zinc, manganese), vitamins, and phytochemicals including flavonoids (flavones and anthocyanidins), phenolic acids, and triterpenes as major classes. These findings underscore the importance of this genus in maintaining a healthy and balanced diet. In comparison to the fruits, we found that the leaves and seeds have a better nutritional value and a larger quantity of bioactive substances. Although the composition varies according to the Grewia species, in general, the Grewia species are high in protein and fiber and have a low to intermediate fat and carbohydrate content, making them an excellent choice for people who are trying to lose weight. Importantly, minerals such as calcium, potassium, sodium, iron, zinc, and manganese were found in notable amounts. The Institute of Medicine recommends a dietary allowance (RDA) of 1000 mg/day calcium for adults (19–50 years), and 100 g of the powder of G. asiatica seeds, G. tenax fruits, and G. villosa fruits can cover approximately 82%, 78%, and 54%, respectively, of the RDA for calcium, which is important for bone health. In the same manner, 100 g of the powder of G. asiatica seeds, G. villosa fruits, G. flavescence fruits, and G. tenax fruits can cover 100% of the RDA for iron; the Institute of Medicine suggests 8 mg/day iron for all age groups of men and for postmenopausal women. Iron functions as a component of a number of proteins, including enzymes and hemoglobin, the latter being important for the transport of oxygen to tissues throughout the body for metabolism . A hundred grams of the fruit powder of G. tenax , G. flavescence , and G. villosa can fulfil up to 23%, 13%, and 19% of the RDA of zinc, which is 8 mg/day for women and 11 mg/day for men . Zinc functions as a component of various enzymes in the maintenance of the structural integrity of proteins and in the regulation of gene expression . The RDA for manganese is 2.3 and 1.8 mg/day for adult men and women, respectively; 100 g of G. tenax fruit powder can cover 100% of the RDA of manganese, whereas the powder of G. asiatica fruits and seeds can satisfy almost 50% of the RDA of manganese, which is involved in the formation of bones and in amino acid, lipid, and carbohydrate metabolism . The RDA of ascorbic acid is 90 mg/day as per the guidelines of the Food and Nutrition Board, and 100 g of the powder of G. asiatica seeds can fulfill 5.7% of the RDA of vitamin C, which is involved in the maintenance of normal connective tissue and wound healing and is needed for bone remodeling. It also acts as an antioxidant, opposes mutation in DNA, and is utilized in the treatment of several cancers . A hundred grams of the fruit powder of G. tenax , G. flavescence , and G. villosa can contribute 16.3%, 17.5%, and 19.6%, respectively, towards the adequate intake (AI) of potassium, which the Food and Nutrition Board suggests should be up to 4700 mg/day. Potassium is responsible for acid-base control, maintaining osmotic pressure, nerve impulse transmission, muscular contraction, and the transport of carbon dioxide and oxygen .
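The percentage coverage values quoted above follow from a simple ratio of mineral content per 100 g of powder to the respective RDA. A minimal worked example in R is given below; the calcium content used is a hypothetical value back-calculated from the ~82% figure reported for G. asiatica seed powder.

```r
# Worked example of the %RDA coverage arithmetic used above (values illustrative only)
rda_calcium  <- 1000   # mg/day, Institute of Medicine RDA for adults 19-50 years
calcium_100g <- 820    # mg calcium per 100 g seed powder, implied by the ~82% figure
coverage_pct <- 100 * calcium_100g / rda_calcium
coverage_pct           # ~82% of the calcium RDA covered by 100 g of powder
```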
The fatty acid profiling of two Grewia species, i.e., G. asiatica and G. bicolor , indicated the presence of both saturated and unsaturated fatty acids, with polyunsaturated fatty acids dominating over saturated fatty acids. Dietary intake of polyunsaturated fatty acids should be increased since they contribute to lower total plasma cholesterol and protect against cardiovascular disease , whereas the intake of saturated fatty acids is linked to hypercholesterolemia and heart problems . Both species had moderate amounts of saturated fatty acids such as palmitic acid (11.46–12.17%) and stearic acid (5.01–5.77%). Stearic acid has been demonstrated to worsen coronary artery disease by reducing high-density lipoprotein cholesterol , and studies have shown that palmitic acid is a potent inducer of DNA damage in insulin-secreting cell lines . In contrast, the unsaturated fatty acids oleic acid and linoleic acid were reported in notable amounts in G. asiatica and G. bicolor seed oils, ranging between 16.31–19.33% and 53.21–60.06%, respectively. Oleic acid is an ω-9 unsaturated fatty acid known to improve high-density lipoprotein (HDL) cholesterol while lowering low-density lipoprotein (LDL) cholesterol, thereby lowering the risk of heart disease and atherosclerosis . Oleic acid also prevents breast cancer cells from proliferating by inhibiting the expression of the cancer-causing oncogene HER-2/neu (erbB-2) . Diets high in oleic acid have also been demonstrated to help slightly obese women lose weight . The phytochemicals identified in this review establish linkages to the underlying mechanisms of the health benefits of Grewia species. Most of the compounds found in Grewia species are known to have several health benefits, including antioxidant, anti-inflammatory, anticancer, hepatoprotective, radioprotective, and antimicrobial effects. The antioxidant activity of the Grewia species lies mainly in the leaves, seeds, and pulp, which possess a higher radical scavenging ability, while peel and stem bark extracts show negligible activity; this was confirmed in in vitro research and is mediated by the higher content of flavonoids, phenolic acids, and triterpenes. Regarding the anti-inflammatory properties, the fruit, bark, and leaf extracts of Grewia species, mainly G. asiatica , controlled pain mediation by suppressing pro-inflammatory cytokines in in vivo assays and also protected red blood cell membranes against heat-induced hemolysis. In the anticancer analyses, fruit extracts exhibited remarkable activity, followed by the leaf extracts, whereas the fruit residues and the stem bark extracts showed negligible activity, consistent with their bioactive metabolite potential. Studies have also shown an improvement of the glycemic profile through reduced serum glucose levels and inhibition of α-amylase and α-glucosidase, as evaluated in in vitro and in vivo research. The Grewia species may also provide antimicrobial activity, with an inhibitory effect on the growth of Gram-positive and Gram-negative bacteria and with antifungal action. Quercetin, chlorogenic acid, caffeic acid, morin, and catechin were the compounds identified in more than one paper. A plethora of literature is available on the biological activities of the listed compounds from different plant sources. The literature cited below correlates the biological activity of the key compounds reported in this study with the existing set of information wherein these compounds have been individually explored for their antioxidant, anti-inflammatory, anticancer, and antimicrobial properties.
Quercetin retrieved from the methanol extract of Asparagus cochinchinensis had notable antioxidant activity against DPPH free radicals, with an IC 50 of 14.52 µg/mL compared with an IC 50 of 10.49 µg/mL for the vitamin C standard . Notable free radical scavenging activity (SC 50 of 6.35 μM) was also reported for chlorogenic acid isolated from the n -butanol fraction of Eriobotrya japonica leaves . Caffeic acid was reported to inhibit DPPH free radicals with an EC 50 of 111 mg/mL . The compound was also found to be effective at reducing ferric iron, with a FRAP value of 11.50 μmol Fe(II)/g d.w. Flavonoids (i.e., catechin, morin) and phenolic acids (i.e., caffeic acid, chlorogenic acid) were reported to exhibit notable antioxidant potential in four different biological assays, including ORAC, FRAP, ABTS, and DPPH . Regarding the anti-inflammatory activity of the key identified compounds from various Grewia species, the intraperitoneal administration of quercetin at 80 mg/kg was reported to attenuate carrageenan-induced paw edema in rats . Methanolic extracts of Cheilanthes farinosa are potential carriers of chlorogenic acid; in a study by Yonathan et al. , the authors suggested that 10 mg/mL of chlorogenic acid had a remarkable anti-inflammatory activity against edema, comparable with that of acetyl salicylic acid at a considerably higher concentration (200 mg/mL). Inhibition of inflammation in carrageenan-induced paw edema using catechin in mice was reported to be approximately 28% at a dose of 30 mg/kg . Plausible information also exists on the anticancer activities of plant-derived bioactive compounds, including flavonoids, and the relevant findings are briefly summarized here. Quercetin isolated from the methanol extract of Asparagus cochinchinensis was reported to exhibit strong cytotoxicity against the HeLa cell line (IC 50 of 5.78 µg/mL), followed by NCI–H460 (IC 50 of 12.57 µg/mL), Hep-G2 (IC 50 of 20.58 µg/mL), and MCF-7 (IC 50 of 31.04 µg/mL) . Catechin isolated from green tea has been reported to inhibit the proliferation of lung cancer cells through the upregulation of the let-7 signaling pathway and the downregulation of the C-MYC/LIN-28 signaling pathway . Previously, some reports have suggested that morin shows a diverse range of biological functions and plays essential roles in suppressing the growth of cancer cells (HepG2, HT29, and HCT116), as shown by Hussain et al. . Morin-treated lung (A549) cells showed decreased cell viability, colony formation, and migration rates compared with dimethyl sulfoxide-treated cells, through suppression of miR-135b expression . Chlorogenic acid derived from Laurocerasus officinalis was reported to exert notable inhibition against the breast cancer cell line MCF-7, with an IC 50 of 30.9 μg/mL . Chlorogenic acid has also been reported to inhibit the proliferation of human lung cancer (A549) cell lines by targeting annexin A2 in vitro and in vivo . Chlorogenic acid and caffeic acid exhibited strong cytotoxic activity in vitro against A549 lung cancer cells, with IC 50 values of 9.8 μM and 8.9 μM, respectively, similar to that of the positive control 5-fluorouracil (3.8 μM) . In an earlier study by Garcia et al. , caffeic acid isolated from Scrophularia frutescens was reported to exhibit an ID 50 of 28.55 × 10 −3 μM against the Hep-2 cell line, which is derived from a human epidermoid carcinoma of the larynx.
Quercetin at 10 μM was recorded to reduce the expression of the immunoreactive P-glycoprotein (Pgp) in ADR-resistant MCF-7 cells. Myricetin was also reported to suppress breast cancer metastasis through down-regulating the activity of matrix metalloproteinases (MMP)-2/9 . Another study by Rajendran et al. reported that plant-derived myricetin exhibited cytotoxic potential by inducing cell cycle arrest and ROS-dependent, mitochondria-mediated apoptosis in A549 lung cancer cells. Regarding the antimicrobial activity, quercetin was reported to inhibit S. aureus and P. aeruginosa at a dose of 20 mg/mL, while P. vulgaris and E. coli were inhibited at concentrations of 300 mg/mL and 400 mg/mL, respectively . Recently, a combined treatment of caffeic acid and UV-A LEDs effectively inactivated E. coli , S. Typhimurium , and L. monocytogenes in both phosphate-buffered saline (PBS) and commercial apple juice with no adverse effect on quality . The antibacterial activity of morin was tested against three bacterial strains, namely E. coli , K. pneumoniae , and S. aureus ; at a concentration of 100 µg/cylinder, morin effectively inhibited all strains .
The Grewia species contain biologically significant amounts of primary metabolites such as carbohydrates, protein and amino acids, ash and minerals, and fiber, but low contents of fats and fatty acids. These characteristics make them a good choice for a healthy life and for weight-conscious people. In addition, crude extracts of various parts, i.e., the fruit, stem, bark, leaves, and seeds, as well as the identified/quantified compounds, including gallic acid, chlorogenic acid, caffeic acid, quercetin, morin, myricetin, vitexin, and catechins, can be used for the development of nutraceuticals in order to address life-threatening ailments. The present review discussed in detail the health-promoting potential of the various anatomical parts of all included Grewia species and the compounds extractable from those parts. Future studies should be conducted to isolate the identified compounds from G. asiatica and to carry out their clinical investigation and safety assessment. We also encourage researchers to work on other Grewia species for nutritional and phytochemical profiling so that comparisons can be drawn, enabling identification of the “best” species from a bioactive and therapeutic point of view. A bibliometric analysis of co-authorships highlighted that most of the authors and regions of study are from South Asia, mainly India and Pakistan. So far, authors from India have collaborated on and explored the antioxidant, anti-inflammatory, anticancer, radioprotective, and hepatoprotective aspects of the Grewia species, whereas authors from Pakistan have collaborated on and evaluated antioxidant, anti-inflammatory, anticancer, antibacterial, antiemetic, and antimalarial activities. Surprisingly, the antibacterial and antimalarial aspects were not explored by Indian authors, and the antidiabetic, hepatoprotective, and radioprotective potential was not explored by Pakistani authors as of the date of the present review. This gap could motivate authors from other geographical regions, where Grewia species also grow, to join the international ethno-geo-pharmacological investigation and to provide a comprehensive evaluation of the bio-potency by applying a unified methodology. Providing a unified specification of this potential Grewia genus and its parts (seeds, stems, roots, leaves, and fruits) and identified compounds (quercetin, myricetin, morin, catechins, gallic acid, chlorogenic acid, caffeic acid, and others) would allow researchers to establish geo-biopotency relationships based on plant growth in the different regions, which vary in altitude, sun exposure, climate, soil type, humidity, and irrigation methods.
|
Bark beetle infestation alters mycobiomes in wood, litter, and soil associated with Norway spruce | ba3c0856-0ed7-4355-b457-d9e991d5105a | 11840958 | Microbiology[mh] | Over the last decade, climate change has been characterized by a repeated series of dry and hot summers coupled with mild winters, favouring massive outbreaks of bark beetles in coniferous forests and subsequent diebacks (Marini et al. , Vacek et al. ). In Germany, approximately one quarter of the whole country is covered with forests of Norway spruce ( Picea abies L. Karst)—a fast-growing conifer producing good, usable wood (Umweltbundesamt ). However, due to its shallow root systems, Norway spruce is sensitive to drought (Puhe ). Higher average temperatures and decreased precipitation during the vegetative growth period have both reduced the resilience of trees to infections, and enhanced conditions suitable for bark beetle infestations (Swift and Ran , Bentz and Jönsson ). Different beetle species often attack trees simultaneously, enabling secondary pests to enter the trees via boreholes (Franceschi et al. ). Norway spruces, like all other plants, are intimately associated with an array of microbes, which live endophytically or in close contact with their partner hosts, including their proximate soil (Bulgarelli et al. , Van Nuland et al. , Lyu et al. ). These so-called ‘phytomicrobiomes’ (for definition see Chen et al. ) can serve the plant during phases of increased environmental stress to adapt the composition of their communities (Trivedi et al. ), which can be plant- and even organ- or tissue-specific (Purahong et al. , Gaube et al. ), so that a microbial adaptation in reaction to a stressor can be very localized to a plant or each part of a plant. Recent studies on the consequences of bark beetle outbreaks in forest stands have mainly focused on below-ground microbial communities (Štursová et al. , Mikkelson et al. , Kosunen et al. ). Forest disturbances, which are in principle necessary and often beneficial processes for forest structure dynamics (Turner ), also affect the close link between vegetation, nutrient cycles, and soil microbes (Fierer ). However, an increased frequency and severity of climate-driven events may disturb forest equilibria to such an extent that subsequent transformations could only be considered negative (Seidl et al. ). Among microbes, plant-associated and soil-borne fungal communities appear to be particularly altered by bark beetle infestations (Štursová et al. ). Such infestations accelerate litter fall (Kopáček et al. ), which affects carbon and nitrogen cycles (Morehouse et al. ), and increases soil temperature and moisture (Reed et al. ). Needle loss, in addition to hindering the flow of photosynthates in phloem (Biedermann et al. ), might weaken the mutualistic association between ectomycorrhizal fungi and trees (Štursová et al. , Veselá et al. ), and leads to less photo-assimilated carbon to be received by fungi in exchange for their supply of nutrients and water to trees (Smith and Read ). Accordingly, in soils a shift from ectomycorrhizal fungi towards other fungal guilds, most likely saprotrophs due to input of wood debris and dying roots, can be expected (Štursová et al. , Karst et al. , Veselá et al. , Custer et al. ). Bark beetles also influence the community of wood-inhabiting fungi since they carry fungal spores on their surface or in their gut (Rostás et al. , Jacobsen et al. , Heck , Seibold et al. ). 
Among soil-born and plant-associated microbes, fungi are highly involved in nutrient cycling, decomposition and soil aggregation processes, multitrophic interactions, and can also harm or protect their plant partners under conditions of environmental stress (Frąc et al. , Zanne et al. ). In the present study, we focus on the Norway spruce mycobiome. We took advantage of a bark beetle-infested Norway spruce stand within the Hainich-Dün area in Central Germany, which is part of a long-term and large-scale research platform Biodiversity Exploratories (Fischer et al. ). Due to tree mortality, the chosen Norway spruce stand was scheduled to be completely harvested in the 2019/2020 winter. Thus, we were able to sample not only soil and needle litter, but also stem wood of the still living trees. In this context, an inventory of the Norway spruce mycobiome at two different bark beetle infestation stages could be realized. We address the following hypotheses by analysing how bark beetle infestation stage affects both fungal taxonomy and guild assignment, in terms of Shannon diversity and community composition: (i) Norway spruce-associated fungi are habitat specific, with higher proportions of symbiotrophs in soil and litter, where ectomycorrhizal fungi dominate; (ii) bark beetle infection stage affects Norway spruce-associated fungi, shifting towards a predominance of saprotrophic and pathogenic fungi at an advanced infestation stage; and (iii) habitat-specific fungi adapt in response to continuous bark beetle infestation, which leads to an increased community dissimilarity among the habitats that might indicate a destabilization of the overall Norway spruce-associated mycobiome. Study area and sampling The study was carried out as part of the Biodiversity Exploratories, a long-term and large-scale research platform, which aims to study how land-use regimes affect biodiversity across Germany (Fischer et al. ). The 100-m×100-m experimental plot was located in the Hainich-Dün region in Central Germany (51°11′N 10°21′E). It was a planted monoculture of ∼0-year-old Norway spruce ( P. abies ), fully colonized by bark beetles. In January 2020, eight individual trees with comparable diameters of ≈18 cm at breast height were selected (see ). Four of the trees were chosen as being in an early stage of bark beetle colonization, i.e. still having green needles and without obvious signs of boreholes and gallery entries; the other four trees already showed signs of late-stage bark beetle colonization, i.e. obvious boreholes, bark galleries, and advanced needle loss. From each tree, 16 samples were taken. From each principal aspect, i.e. along compass directions north, east, south and west, we took stem wood at 130- and 15-cm trunk height, needle litter ≈50 cm away from the trunk, and bulk soil from 0 to 10 cm below that. We use the term ‘habitat’ to label these four sampling locations within the spruce forest environment. The stem wood samples were taken after removing remaining bark with a 6-mm-diameter drill; the wood chips were collected separately. Between taking samples, the drill was cleaned with 70% ethanol. Litter was collected with a 6-cm spatula without discriminating the horizons. Sampled soil cores had a diameter of 3 cm. All samples were stored at 4°C until the next day when wood and litter samples were frozen in liquid nitrogen to homogenize them individually for 5 min with 28 r/s in a TissueLyser II (Qiagen GmbH, Hilden, Germany) by 9-mm-diameter steel grinding balls. 
The soil samples were homogenized manually, while any remaining plant particles were removed. DNA extraction, library preparation, and Illumina sequencing Genomic DNA of all homogenized samples was extracted using the Quick-DNA TM Fecal/Soil Microbe Miniprep Kit (Zymo Research Europe, Freiburg, Germany), following the manufacturer’s guidelines. Resulting DNA concentrations were measured using a NanoDrop8000 UV-Vis spectrophotometer (Peqlab Biotechnologie GmbH, Erlangen, Germany), and all extracts were equally diluted prior to amplification. Amplification of the fungal internal transcribed spacer region 2 (ITS2) followed the descriptions in Prada-Salcedo et al. using P7-3 N-fITS7 and P7-4 N-fITS7 together with P5-5 N-ITS4 and P5-6 N-ITS4 (Gardes and Bruns , Ihrmark et al. , Leonhardt et al. ). Each sample was amplified in triplicate, accompanied by one negative control per polymerase chain reaction (PCR) plate. The success of the amplifications was checked by gel electrophoresis. The amplicon triplicates were pooled together and purified with an Agencourt AMPure XP kit (Beckman Coulter, Krefeld, Germany). These cleaned products were used as templates in a subsequent PCR, introducing the Illumina Nextera XT indices and sequencing adaptors according to the manufacturer’s instructions. The PCR conditions were as follows: initial denaturation at 95°C for 3 min, eight cycles of denaturation at 98°C for 30 s, annealing at 55°C for 30 s, followed by elongation at 72°C for 30 s, and a final extension at 72°C for 5 min. Resulting PCR products were purified again with AMPure beads. The amplicon libraries were quantified using PicoGreen assays (Molecular Probes, Eugene, OR, USA) and pooled equimolar. Fragment sizes and the quality of the DNA sequencing libraries were checked using an Agilent 2100 Bioanalyzer (Agilent Technologies, Palo Alto, CA, USA). The final pool was used for paired-end sequencing of 2 × 300 bp with a MiSeq Reagent kit v3 on an Illumina MiSeq platform. The sequencing was performed at the Department of Soil Ecology of the Helmholtz-Centre for Environmental Research—UFZ in Halle (Saale), Germany. Raw sequences were deposited in the National Center for Biotechnology Information (NCBI) Sequence Read Archive (SRA) under study accession number PRJNA930322. Bioinformatics Raw forward and reverse ITS2 reads were demultiplexed with default parameters by the Illumina reporter software v2.5.1.3 according to the index combinations, and provided as fastq files with the Illumina adaptors, indices, and sequencing primers removed. Further downstream processing was realized using the DADA2-based (Callahan et al. ) pipeline dadasnake v0.11 (Weißbecker et al. ). Amplification primers were removed using cutadapt (Martin ). The minimal read length was set to 70 bp, with a minimum Phred score of 20. To merge the sequences, a minimum overlap of 20 bp was required. After removing chimeric sequences, amplicon sequence variants (ASVs; Callahan et al. ) were generated. Due to intraspecific variability of the ITS (Estensmo et al. ), ASVs were clustered at 97% sequence similarity using VSEARCH v2.22 (Rognes et al. ). Hence, we subsequently refer to operational taxonomic units (OTUs) as a fungal species equivalent. The representative sequence of each OTU was used for the taxonomic assignment of the ITS2 sequences, which was performed using the MOTHUR implementation of the Bayesian classifier (Schloss et al. ) and the database UNITE, v10.0 (Abarenkov et al. ) within dadasnake (Weißbecker et al. ).
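The read-processing workflow described above was run through the dadasnake wrapper; the following minimal R sketch only approximates the key parameter choices (70-bp minimum read length, truncation at Phred scores below 20, 20-bp merging overlap, chimera removal) using the underlying DADA2 functions directly. File paths and object names are hypothetical, and the final 97% OTU clustering with VSEARCH is not shown.

```r
# Minimal DADA2-style sketch approximating the parameters described above.
# This is NOT the authors' dadasnake configuration; paths and names are hypothetical.
library(dada2)

path   <- "primer_trimmed_fastq"   # cutadapt output, primers already removed
fnFs   <- sort(list.files(path, pattern = "_R1.fastq.gz", full.names = TRUE))
fnRs   <- sort(list.files(path, pattern = "_R2.fastq.gz", full.names = TRUE))
filtFs <- file.path(path, "filtered", basename(fnFs))
filtRs <- file.path(path, "filtered", basename(fnRs))

# Quality filtering: discard reads shorter than 70 bp, truncate at Phred < 20
filterAndTrim(fnFs, filtFs, fnRs, filtRs,
              minLen = 70, truncQ = 20, maxN = 0, multithread = TRUE)

# Error learning and denoising into amplicon sequence variants (ASVs)
errF   <- learnErrors(filtFs, multithread = TRUE)
errR   <- learnErrors(filtRs, multithread = TRUE)
dadaFs <- dada(filtFs, err = errF, multithread = TRUE)
dadaRs <- dada(filtRs, err = errR, multithread = TRUE)

# Merge read pairs with a minimum overlap of 20 bp, then remove chimeras
mergers <- mergePairs(dadaFs, filtFs, dadaRs, filtRs, minOverlap = 20)
seqtab  <- makeSequenceTable(mergers)
seqtab_nochim <- removeBimeraDenovo(seqtab, method = "consensus", multithread = TRUE)
```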
The database FungalTraits, v1.2 (Põlme et al. ), was then used to parse fungal taxonomy and determine guilds, i.e. saprotroph, symbiotroph, and pathotroph according to the ‘primary lifestyle’. Statistical analyses All statistical analyses were performed using R (v4.2.2) (R Core Team ). Initially, the processed data were reduced and only fungal OTUs with >10 sequence reads (to exclude singletons and sequencing noise), which had been detected in at least two samples (to correct for our small sample size), were kept. The samples of this filtered dataset were then evenly rarefied to 95% of the smallest number, i.e. 32 774 sequences per sample using the R package phyloseq (McMurdie and Holmes ). Rarefaction curves, prepared with the R package vegan (Oksanen et al. ), confirmed that a saturation of fungal OTUs per sample had been reached. Subsequent analyses were based on the retrieved fungal OTU matrix. We used the R package metacoder (Foster et al. ) to analyse shifts in fungal taxonomy in relation to spruce habitat and bark beetle infection stage. By comparing the abundance of individual taxa within treatments, heat trees, i.e. tree-based visualizations, which contained all OTUs with ≥5 reads, were calculated to depict statistical differences using divergent colours. The core mycobiome of Norway spruces at the early and late stages of bark beetle infection was evaluated using proportional Venn diagrams, produced with the R package eulerr (Larsson ). Fungal taxonomy and lifestyle, i.e. guild annotation, of the OTUs shared within all Norway spruce habitats were depicted using pie charts. The fungal Shannon diversity was calculated using phyloseq (McMurdie and Holmes ). Boxplots, to visualize differences in fungal Shannon diversity in relation to spruce habitat and bark beetle infestation stage, were prepared using the R package ggplot2 (Wickham ). Wilcoxon rank sum exact tests for overall and lifestyle-specific fungal Shannon diversities were calculated, and significant differences ( P ≤ .05) were evaluated via the Tukey honestly significant difference (HSD) post hoc test using the R package vegan (Oksanen et al. ). To display shifts in overall and lifestyle-specific fungal community compositions, principal coordinate analyses (PCoA) were conducted and visualized based on Bray–Curtis dissimilarities using ggplot2 (Wickham ). Permutational analyses of variance (perMANOVA), as implemented in vegan (Oksanen et al. ), were applied to calculate the impact of spruce habitat and bark beetle infection stage on fungal communities. Bray–Curtis test results were also used to explore dissimilarities among mycobiome communities sampled from early and late infestation sites and to compare fungal community dissimilarity between and within habitats. Comparisons were visualized as split violin plots prepared with ggplot2 (Wickham ).
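As a rough illustration of the filtering, rarefaction, and diversity steps described above, a minimal R sketch using phyloseq and vegan is given below. The phyloseq object `ps` and the metadata columns `habitat` and `stage` are hypothetical placeholders, and the group comparison shown is only indicative of the tests reported by the authors, not their script.

```r
# Sketch of OTU filtering, rarefaction, and Shannon diversity in R
# (`ps` is a hypothetical phyloseq object with OTU counts and sample metadata)
library(phyloseq)
library(vegan)

# Keep OTUs with >10 reads in total that were detected in at least two samples
ps_filt <- filter_taxa(ps, function(x) sum(x) > 10 & sum(x > 0) >= 2, prune = TRUE)

# Rarefy all samples to an even depth (32 774 reads, as in the study)
ps_rare <- rarefy_even_depth(ps_filt, sample.size = 32774, rngseed = 42)

# Rarefaction curves to check OTU saturation per sample (vegan expects samples in rows)
otu <- as(otu_table(ps_rare), "matrix")
if (taxa_are_rows(ps_rare)) otu <- t(otu)
rarecurve(otu, step = 500, label = FALSE)

# Shannon diversity per sample and an indicative non-parametric group comparison
shannon <- diversity(otu, index = "shannon")
meta    <- data.frame(sample_data(ps_rare))
pairwise.wilcox.test(shannon, interaction(meta$habitat, meta$stage),
                     p.adjust.method = "holm")
```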
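The ordination and perMANOVA steps can be sketched along the same lines; `otu` and `meta` are the hypothetical rarefied OTU matrix and metadata from the previous snippet, and the model formula is only an approximation of the analyses reported in the Results.

```r
# Sketch of Bray-Curtis dissimilarity, PCoA, and perMANOVA in R (vegan),
# reusing the hypothetical `otu` matrix and `meta` data frame from above.
library(vegan)

bray <- vegdist(otu, method = "bray")          # Bray-Curtis dissimilarities

# Principal coordinate analysis (classical metric scaling of the dissimilarities)
pcoa   <- cmdscale(bray, k = 2, eig = TRUE)
scores <- data.frame(Axis1 = pcoa$points[, 1], Axis2 = pcoa$points[, 2], meta)
# `scores` can be passed to ggplot2 to draw an ordination plot

# perMANOVA: effects of habitat, infestation stage, and their interaction
adonis2(bray ~ habitat * stage, data = meta, permutations = 999, by = "terms")

# Pairwise dissimilarities within vs. between habitats (e.g. for violin plots)
bray_mat <- as.matrix(bray)
hab      <- as.character(meta$habitat)
same     <- outer(hab, hab, "==")[lower.tri(bray_mat)]
dissim   <- data.frame(value      = bray_mat[lower.tri(bray_mat)],
                       comparison = ifelse(same, "within habitat", "between habitats"))
```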
Bark beetle identification Individual beetles and bark galleries were identified, indicating Pityogenes chalcographus (Linnaeus, 1760), Polygraphus poligraphus (Linnaeus, 1758), and Crypturgus cinereus (Hbst., 1793) to be the main spruce colonizers on the studied forest plot. Relationship between bark beetle infestation stage, fungal taxonomy, and fungal guilds After rarefaction, a total of 1706 fungal OTUs were found in 128 samples across the four spruce habitats and both bark beetle infestation stages under consideration. The communities were dominated by fungi belonging to Asco- and Basidiomycota (937 and 453 OTUs, respectively). We were able to detect significant differences in fungal taxonomy according to the infestation stage and spruce habitat (see ). While significant enrichments of certain fungal taxa in soil and litter were observed at higher taxonomic levels such as fungal order and family ( and ), taxonomic differences in both stem wood habitats occurred already at the fungal class level ( and ). The fungal communities were dominated by saprotrophs (513 OTUs), followed by pathotrophs (170 OTUs) and symbiotic fungi (158 OTUs). Approximately 51% of all OTUs could not be linked to any fungal lifestyle (865 OTUs). Considering the distribution among the fungal guilds, soil and litter samples were dominated by saprotrophic and symbiotrophic fungi. In contrast, the proportion of pathotrophic fungi within the stem wood increased (Fig. ). However, the overall number of OTUs decreased from soil upwards to the stem wood, as did the number of saprotrophic, pathotrophic, and symbiotrophic OTUs for both stages of bark beetle infestation (see ). Nevertheless, though the number of fungal OTUs found in wood samples was the lowest compared to soil and litter, proportionally this habitat contained the highest uncertainty regarding assignable fungal guilds (Fig. ; ). Norway spruce core mycobiome The core mycobiome appeared to be relatively stable across bark beetle infestation stages . For both Norway spruce infestation stages, 13% of all fungal OTUs could be found in all sample types. However, the proportion of Ascomycota was higher at the late bark beetle infection stage, which was associated with a reduction of Basidiomycota. Saprotrophic and symbiotrophic fungi decreased in the core mycobiome of late-stage Norway spruces, while pathotrophs and fungi without guild annotation increased. Spruce habitat and bark beetle infestation stage affect fungal Shannon diversity The fungal Shannon diversity was specifically affected by the bark beetle infestation stage in the different habitats (Fig. ). With the exception of soil, the fungal Shannon diversity decreased significantly between early and late infestation stages.
Moreover, the fungal Shannon diversity was highest in soil and litter, and decreased significantly moving up through the sampling habitats. Similar patterns were found when analysing the Shannon diversity of fungal saprotrophs and symbiotrophs separately (Fig. ). However, the Shannon diversity of pathogenic fungi was highest in litter (Fig. ). Habitat-specific fungal communities respond to bark beetle infestation stage The fungal communities were habitat specific, i.e. they were significantly divergent between soil, litter, and the two levels of stem wood (perMANOVA: spruce habitat: R ² = 0.33, P ≤ .001, bark beetle infestation stage: R ² = 0.06, P ≤ .001, interaction effect: R ² = 0.07, P ≤ .001; Fig. ). Additionally, the bark beetle infestation stage particularly affected the fungal communities present in stem wood, independent of the sampling height. A very similar pattern was found for the saprotrophic subcommunities, which were habitat-specific; based on bark beetle infestation stage, there was an apparent difference, especially in stem wood (Fig. ; perMANOVA Saprotrophs : spruce habitat: R ² = 0.25, P ≤ .001, bark beetle infestation stage: R ² = 0.03, P ≤ .001, interaction effect: R ² = 0.05, P ≤ .001). Pathogenic fungal subcommunities were especially habitat-specific, and shaped by bark beetle infestation stage, though the ordination showed less distinct communities (Fig. ; perMANOVA Pathotrophs : spruce habitat: R ² = 0.39, P ≤ .001, bark beetle infestation stage: R ² = 0.11, P ≤ .001, interaction effect: R ² = 0.11, P ≤ .001). In contrast, symbiotic communities appeared to be less habitat-specific or shaped by bark beetle infestation stage (Fig. ; perMANOVA Symbiotrophs : spruce habitat: R ² = 0.18, P ≤ .001, bark beetle infestation stage: R ² = 0.02, P ≤ .001, interaction effect: R ² = 0.04, P ≤ .001). Bark beetle infestation stage increased the dissimilarity of the Norway spruce mycobiome Complementary to the results of the perMANOVA and the PCoA visualization, the dissimilarity of fungal communities between habitats was relatively high, and was higher in trees at later rather than earlier stages of infestation (see ). Habitat-specific fungal communities displayed the highest community dissimilarity in soils, which significantly decreased from litter to the stem wood (Wilcoxon rank sum exact test, P > .05). Likewise, a significantly higher variance of community dissimilarities with infestation stage for habitat-specific fungal communities was detectable ( P > .005; Fig. ). Bark beetle infestations thus appear to cause a significant change in the overall community composition of fungi associated with these habitats.
This study aimed to analyse changes in the respective mycobiomes associated with soil and tree habitats of Norway spruce trees at different stages of bark beetle infestation following repeated periods of meteorologically extreme drought and heat affecting a forest plot in Central Germany. To this end, we investigated soil, litter, and two stem wood heights of eight individual spruce trees, infested by the six-toothed bark beetle Pityogenes chalcographus L., and the four-eyed spruce bark beetle Polygraphus poligraphus L. These two beetle species are phloem-feeding bark breeders, known to prefer spruce (Hammerbacher , Rizan et al. ). The third beetle species we found, Crypturgus cinereus Hbst., is a secondary pest insect that uses the boreholes of other bark beetles (Kleine ). The investigated infestation event led to heterogeneous tree phenotypes on the studied Norway spruce plot, which shows that the progress of bark beetle mass-spreading can be slow and have specific effects on individual trees. To our knowledge, this is the first study considering Norway spruce-associated fungi in different habitats of the plant–soil system after being infested by bark beetles. We found that fungal communities were highly specific to the different habitats, but the bark beetle infestation stage only had an effect on fungi associated with stem wood, and not on fungi within the litter or the soil. Bark beetle infestations can have significant effects on the properties of wood—changes that might result in the predominance of saprotrophic fungal taxa (Hýsek et al. ). Tree stems are, moreover, exposed to a variety of above- and below-ground influences, which are also associated with fungi (Gönczöl and Révay , Golan and Pringle , Vasutova et al. ). The overall fungal diversity in stem wood was lower than that in soil or litter, and it significantly dropped and differed more at the late bark beetle infestation stage. However, due to a lack of a formal control group in the stand, i.e. there were no healthy Norway spruce individuals, the question of whether the fungal response to bark beetle infestation and subsequent tree dieback enhances or reduces the specificity of habitat-specific mycobiomes remains open. Bark beetle infestations lead to needle loss, and eventually tree mortality, which has cascading effects for the whole forest stand (Morehouse et al. ). In the worst case, disturbances to nutrient cycles can transform such stands from carbon sinks to temporary sources (Ghimire et al. ). Within terrestrial ecosystems, microbes are often key players in nutrient cycling (Gougoulias et al. ).
Accordingly, most recent studies have focused on the effect bark beetle infestations have on soil microbes (Štursová et al. , Mikkelson et al. , Veselá et al. , Kosunen et al. ). Our results show no effect of bark beetle infestation stage on the general fungal or guild-specific Shannon diversity, nor on their community composition within the soil habitat. These results agree with those reported by Štursová et al. , who investigated changes in the soil fungal communities several years after a bark beetle outbreak. A possible explanation for the stable soil fungal communities might be the high buffering abilities of soils as habitats after environmental changes (Goldmann et al. ). Even the communities of symbiotrophic fungi appeared to not shift as the infestation progresses, although they rely directly on photoassimilation. However, it might be expected that a decrease in symbiotrophs, particularly those defined as ectomycorrhizal fungi, coupled with an increase in saprotrophic fungi, should occur as a bark beetle infestation progresses (Štursová et al. , Treu et al. ). The comparable patterns in communities of soil fungi that we found across different infestation stages could also arise from our sequencing approach. This method cannot detect eventual shifts within subcommunities of active fungi in response to environmental stress (Mikkelson et al. ) as it detects the spores of both active and inactive mycelia. Compared to the soil, the litter was inhabited by a distinct fungal community, which did not differ significantly between the early and late infestation stages. The fungal Shannon diversity in litter was comparable to that in soil at the early stage, but significantly lower at the late infestation stage. This suggests that the addition of fallen bark due to the beetle galleries excavated under it (Lieutier et al. ), twigs, and branches (Kosunen et al. ) does not increase the diversity of fungal communities in litter. It is still unclear what influence bark beetles hibernating in litter may have on the diversity of fungal communities. To summarize, our results confirmed our hypotheses that fungal communities are highly habitat-specific, and that the bark beetle infestation led to a reduction of symbionts and an increase of pathotrophic fungi. Considering the fungal communities as habitat-interdependent, the bark beetle infestation increased the spruce mycobiome dissimilarity. The presence of so many unclassifiable fungi in all habitats and infestation stages we studied could have significantly affected our ability to interpret the results, particularly concerning expected shifts towards saprotrophic and pathogenic fungi. Thus, we cannot exclude an increase of antagonistic fungi at later stages of bark beetle infestation. Although there is still a paucity of knowledge concerning endophytic and wood-associated communities (de Errasti et al. , Pellitier et al. , Lee et al. ), they appear to be very dynamic after disturbances and therefore warrant further investigation. Continuing climate change will lead to more disturbances and diebacks to previously relatively stable forest ecosystems, including massive infestations of pest organisms such as bark beetles. It is crucial to deepen our understanding of how such disruptions may affect, not just forest trees, but also their cohabiting and associated microorganisms, their trophic interactions, and their role in ecosystem functioning, in order to know how to manage these important resources under future climate change. 
With our study, we have made a first step towards understanding the dynamics of fungal community composition in different forest habitats intimately associated with Norway spruce. Our results have shown that the habitat most heavily attacked by bark beetles, the stem wood, also underwent major changes between the early and late stages of infestation. These changes, in turn, had consequences for the overall mycobiomes in the habitats below ground. This insight underscores the value of assessing not just terrestrial substrates but also the trees themselves.
Education‐based differences in alcohol health literacy in Germany | 0a829f14-f717-44fb-ba7f-83e078c2dc1b | 11814345 | Health Literacy[mh] | INTRODUCTION Alcohol use is causally linked to more than 200 diseases and injuries , rendering it a major contributor to the health burden in Europe. In numbers, more than 583,000 people died due to an alcohol‐attributable cause in 2019 in the World Health Organization European Region, equivalent to 6% (95% confidence interval [CI] 5.4–7.1%) of all deaths according to the Global Burden of Disease study . The three major disease categories are digestive diseases, neoplasms and cardiovascular diseases with 17.7 (95% CI 14.5–20.5), 14.4 (95% CI 12.9–15.9) and 12.2 (95% CI 5.6–19.1) deaths per 100,000. Despite ample epidemiological evidence demonstrating alcohol use as a major preventable risk factor for these diseases , there is a concerning lack of health literacy on these health risks in the European population. According to a recent online survey in 14 European countries, the knowledge about alcohol's causal role in the development of a range of diseases reached 90% for liver disease but dropped to 53% for cancers and only 11% for respiratory diseases (e.g., tuberculosis) . Knowledge about the potential health hazards of a health‐related behaviour such as alcohol use is often discussed as a critical determinant to practicing this behaviour. However, just knowing about health risks appear to be insufficient to lead to behavioural change , especially for behaviours such as drinking alcohol, which is deeply rooted socially and culturally in many countries . The concept of health literacy acknowledges the complexity of making health‐related decisions and involves multiple components: (i) knowledge and understanding (e.g., alcohol content in beverages, health risks); (ii) skills (e.g., access health‐related information); (iii) critical thinking (e.g., appraise marketing messages); and (iv) system competence (e.g., navigate in health systems; ). If alcohol health literacy is perceived in its full complexity, including higher‐level changes to the health care system and environment, for example, by reducing the affordability and availability of alcoholic beverages, it is very likely to lower population‐level alcohol consumption. In an expert panel study, education and information measures targeting the individual were rated most effective to increase alcohol health literacy, while being least effective to leading to an actual decline in alcohol use, for which alcohol control policies were rated the highest . As most European countries follow a narrative of ‘responsible alcohol consumption’ characterised by limited regulations of alcoholic products , high levels of alcohol health literacy are vital to public health. In fact, this liberal approach presupposes that all consumers can make informed decision by being able to access, understand and apply the relevant information to their day‐to‐day life in order to achieve desirable public health outcomes (e.g., reduced alcohol‐related health burden) . However, this appears not to be the case as knowledge on alcohol's causal role in the development of different diseases were found to be higher in people with tertiary education compared to those with primary or secondary education in a sample of Europeans from 14 countries . 
Such a correlation between years of education and health literacy has also been reported in the reach of German alcohol prevention campaigns and on general health literacy (e.g., ), suggesting that the current narrative of ‘responsible alcohol consumption’ may penalise people with lower levels of education (see ). Against this backdrop, we aim: (i) to capture the level of alcohol health literacy in the general adult population in Germany; as well as (ii) to investigate education‐based differences therein. We conduct our research in Germany that stands out for very high levels of alcohol use (alcohol sales in 2019: 10.6 L of pure alcohol, per capita ) and very liberal alcohol regulations . While alcohol consumption and harms have declined slightly in the past decade in Germany, the alcohol‐attributable burden remains high . Currently, there is no data available on how alcohol health literacy is distributed in Germany. We hypothesise that alcohol health literacy is significantly lower in people with low or medium levels of education compared to those with high education. METHODS 2.1 Study design and data collection We conducted a cross‐sectional survey of a convenience sample drawn from the general adult (18+ years) population in Germany. The survey was developed for the purpose of this study and comprised 29 items, covering questions on: (i) alcohol consumption (based on the AUDIT‐C , Lübecker translation); (ii) alcohol health literacy; (iii) general health literacy (HLS‐EU‐Q6 ); and (iv) socio‐demographics (for details, see below). The questionnaire was implemented online via LimeSurvey and participation was possible between 27 February 2023 and 11 April 2023. Respondents had to be at least 18 years of age to be eligible for participation. Potential respondents were recruited by using the personal networks of the study authors, the professional network of the Center for Interdisciplinary Addiction Research (University Medical Center Hamburg‐Eppendorf, Hamburg) and paid advertisements (for 500 clicks) on Facebook. This study's sample was limited to respondents who completed the survey (i.e., provided any answers to all items), which resulted in a study sample of n = 611 respondents. This study was approved by the local psychological ethics committee of the Center for Psychosocial Medicine, University Medical Center Hamburg‐Eppendorf, Hamburg (2023/02/08). 2.2 Measures 2.2.1 Dependent variable Alcohol health literacy was measured by a composite score that was based on nine items (see Table , Supporting Information). The questionnaire was developed under consideration of available topic‐related surveys (e.g., the World Health Organization survey on alcohol labelling ) and used a similar item scale ranging from 1 (lowest) to 4 (highest) as the HLS‐EU‐Q6 questionnaire to enable comparisons. Participants were asked about their knowledge regarding alcohol‐related diseases (one item), typical misconceptions and wrong beliefs regarding alcohol (four items), German alcohol drinking recommendations (as of January 2023 ; three items), and their ability to access and understand information on alcohol‐related health risks (one item). One additional item inquired whether respondents feel well‐informed about alcohol‐related risks, which was, however, not included in the alcohol health literacy score as it does not measure alcohol knowledge (item 2.7, see Table ). The composite score was calculated as the sum of all nine items divided by the number of items and ranged between 1 (lowest) and 4 (highest). 
Each item had an equal weight in the composite score, though the first item on alcohol‐related diseases covered nine different health conditions and respondents had to identify all nine to receive the highest score (i.e., four points; three points for 8 conditions, two points for 6–7 conditions, one point for 5 or fewer conditions). The composite score was dichotomised to differentiate ‘sufficient’ (scores >3) from ‘insufficient’ (scores ≤3) alcohol health literacy, that is, ‘sufficient’ alcohol health literacy could be achieved by giving correct answers (four points) to most questions. Cut‐offs were adopted from previous research applying the HLS‐EU in Germany . While the original HLS‐EU differentiates between three levels of health literacy (sufficient, problematic, inadequate), we grouped the two lower levels (problematic, inadequate) together to reflect ‘insufficient’ alcohol health literacy, given that only very few respondents (4%) had scores below or equal to two. 2.2.2 Independent variable Educational attainment was recorded in accordance with the International Standard Classification of Education (ISCED 11) and grouped into low, medium, and high based on the highest level of education completed . In cases where respondents indicated that they had not yet completed their education, we asked them to indicate their anticipated level of education. 2.2.3 Covariates Respondents were asked about their gender (men, women, ‘other’) and age (continuous). The age variable was grouped into three broad age groups (18–34, 35–49, 50+ years [highest recorded age was 80 years]). Alcohol use based on the Alcohol Use Disorders Identification Test‐Consumption (AUDIT‐C) sum score (range: 0–12) was grouped into current non‐drinker (AUDIT‐C score = 0), low‐risk (AUDIT‐C score ≥1 and <4 and <5 for women and men, respectively), and high‐risk alcohol use (AUDIT‐C score ≥4 and ≥5 for women and men, respectively) . General health literacy based on the HLS‐EU‐Q6 was classified into ‘inadequate’ (scores ≤2), ‘problematic’ (scores 2–3), and ‘sufficient’ (scores ≥3) health literacy in line with previous studies . 2.3 Statistical analysis This study does not build upon a study protocol; thus, all findings should be considered exploratory. We first used descriptive statistics (proportions, means) to describe our sample by alcohol health literacy status and gender, using chi‐squared test statistics for statistical inference. Next, to test our hypothesis, we built logistic regression models with alcohol health literacy and educational attainment as dependent and independent variables, respectively. A first model was adjusted for gender and age group only. In a second and third model, we added the AUDIT‐C and general health literacy, respectively. As we expected the outcome of interest (insufficient alcohol health literacy) to be a non‐rare event (prevalence >10%), odds ratios were recalculated into risk ratios to ease interpretation . The level of statistical significance was α ≤ 0.05. Data cleaning was undertaken in Stata SE 15.1 and statistical analysis in R version 4.2.1 .
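To make the scoring and grouping rules above concrete, the following minimal R sketch shows how the nine-item composite score, its dichotomisation at a score of 3, and the sex-specific AUDIT-C grouping could be implemented; all function and variable names are hypothetical, and the code is an illustration rather than the authors' actual analysis script.

```r
# Minimal sketch, not the authors' code: deriving the alcohol health literacy
# composite score and related groupings. All names are hypothetical.

score_disease_item <- function(n_correct) {
  # Item 1 lists nine alcohol-related conditions; grading rule as described above
  if (n_correct == 9) {
    4L
  } else if (n_correct == 8) {
    3L
  } else if (n_correct >= 6) {
    2L
  } else {
    1L
  }
}

alcohol_hl_score <- function(disease_count, other_items) {
  # other_items: the remaining eight items, each already scored 1 (lowest) to 4 (highest)
  stopifnot(length(other_items) == 8, all(other_items %in% 1:4))
  mean(c(score_disease_item(disease_count), other_items))  # sum of nine items / 9, range 1-4
}

classify_ahl <- function(score) {
  # Dichotomisation used in the study: 'sufficient' if the composite score exceeds 3
  ifelse(score > 3, "sufficient", "insufficient")
}

classify_auditc <- function(auditc, sex = c("women", "men")) {
  # Sex-specific AUDIT-C cut-offs as given above (high risk: >= 4 for women, >= 5 for men)
  sex <- match.arg(sex)
  cut_off <- if (sex == "women") 4 else 5
  if (auditc == 0) {
    "current non-drinker"
  } else if (auditc < cut_off) {
    "low-risk use"
  } else {
    "high-risk use"
  }
}

# Example: 8 of 9 conditions identified, seven items scored 4 and one scored 3
s <- alcohol_hl_score(disease_count = 8, other_items = c(4, 4, 4, 4, 4, 4, 4, 3))
classify_ahl(s)            # "sufficient" (composite score of about 3.8)
classify_auditc(6, "men")  # "high-risk use"
```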
RESULTS 3.1 Alcohol health literacy in the study population Six hundred and eleven individuals completed our online survey, of which 162 (26.5%) were excluded due to missing information on alcohol health literacy (i.e., response option ‘Not specified’). Another 8 (1.3%), 1 (0.1%), 2 (0.3%) and 44 (7.2%) observations were excluded due to missing information on educational attainment, gender, alcohol use and general health literacy, respectively. As only three respondents indicated ‘other’ gender, we restricted our analysis to men and women. The final sample with complete data on key variables comprised n = 228 men and n = 163 women, with a mean age of 44.1 (SD = 17.8) and 45.7 (SD = 15.5) years, respectively. In our sample, insufficient alcohol health literacy was recorded in 47.8% of men and 41.1% of women. Table depicts the descriptive sample statistics by alcohol health literacy status and gender. The proportion of men and women with insufficient and sufficient alcohol health literacy did not statistically significantly differ regarding their age, education and general health literacy distributions. However, significantly more men with sufficient alcohol health literacy reported currently abstaining from alcohol ( n = 23, 19.3%) compared to men with insufficient alcohol health literacy ( n = 8, 7.3%). Among women, 1 in 10 indicated that they do not drink alcohol (for comparison, men: 13.5%). Most respondents (≥75%) indicated that liver diseases, injuries, fetal damage, mental health conditions, and heart diseases are causally linked to alcohol use (Figure ). The alcohol‐cancer link was known by 57.7% (low education) to 83.0% (high education), while alcohol's causal contribution to dementia was reported by 50.0% (low education) to 66.0% (high education), suggesting an educational gradient as hypothesised. The role of alcohol use in respiratory and infectious diseases was identified by less than a quarter of respondents across educational groups.
Moreover, at least three out of four respondents stated that drinking wine would benefit their health and that wine would be generally better for their health than the same amount of pure alcohol in beer (Figure ). Respondents were further asked to indicate the limits of at‐risk alcohol use according to the German drinking guidelines as of early 2023 (Figure ). Most respondents were able to report the relevant drinking limits (per day, men: 2 drinks, women: 1 drink, women during pregnancy: 0 drinks). However, 23.1% to 34.6% of respondents from the low‐education group indicated values above the recommended limits, while this was the case in 11.0% to 18.8% of those in the other education groups. Lastly, we asked respondents how easily they understand information on alcohol‐related risks (Figure ). Across education groups, the majority indicated that they consider it easy to very easy to understand health‐related information on alcohol risks (low education: 61.6%, high education: 76%). 3.2 Educational differences in alcohol health literacy Table presents the results from the logistic regression models. According to the fully adjusted model, respondents with low education were about 1.35 times more likely to have insufficient alcohol health literacy compared to those with high education. Moreover, current non‐drinkers were significantly less likely to have insufficient alcohol health literacy than low‐risk alcohol users, while there was no statistically significant difference between high‐risk and low‐risk alcohol users. All other covariates (gender, age group, general health literacy) were not statistically significantly associated with the outcome. Given the high degree of missingness in the outcome variable, we ran a sensitivity analysis using a unique category for missing alcohol health literacy and fitted a multinomial logistic regression (alcohol health literacy: sufficient, insufficient, missing). Including respondents with missing data on alcohol health literacy, the analytical sample size was substantially larger ( n = 508). The results are shown in Table and suggest no statistically significant association between missing alcohol health literacy data and education. Women were about 1.4 times more likely to have missing data compared to men. This significant association, however, diminished when adjusting for alcohol use and general health literacy. In multinomial regression analyses, the statistically significant association between low education and insufficient alcohol health literacy amounted to a risk ratio of about 1.6 across models.
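As a hedged illustration of the modelling step reported above, the R sketch below fits a logistic regression for insufficient alcohol health literacy and converts the resulting odds ratio for low (versus high) education into an approximate risk ratio using one common approximation (Zhang and Yu); the data frame, variable names and factor coding are hypothetical, and the exact conversion method used by the authors may differ.

```r
# Minimal sketch, assumption only: fully adjusted logistic model and an
# approximate odds-ratio-to-risk-ratio conversion. 'dat' and all column names
# are hypothetical.

fit <- glm(insufficient_ahl ~ education + gender + age_group + drinking_group + general_hl,
           family = binomial(link = "logit"), data = dat)

or_to_rr <- function(or, p0) {
  # Common approximation (Zhang & Yu): p0 is the outcome prevalence in the
  # reference group, here respondents with high education
  or / ((1 - p0) + p0 * or)
}

or_low_edu <- exp(coef(fit)["educationlow"])  # coefficient name depends on factor coding
p0 <- mean(dat$insufficient_ahl[dat$education == "high"])
or_to_rr(or_low_edu, p0)  # compare with the risk ratio of about 1.35 reported above
```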
DISCUSSION In our exploratory study, we investigated the distribution of alcohol health literacy in a convenience sample of adults residing in Germany. We applied a brief nine‐item alcohol health literacy questionnaire assessing knowledge pertaining to alcohol risks and to the guidelines on high‐risk alcohol use, as well as the understanding of health information related to alcohol use. In our sample, more than 50% of respondents were classified as having sufficient alcohol health literacy, defined as having comprehensive knowledge about alcohol‐related health harms and drinking guidelines in addition to the ability to access and understand this information. High levels of alcohol health literacy were mainly driven by most respondents correctly identifying misconceptions and wrong beliefs about alcohol, as well as by being able to correctly specify the low‐risk drinking limits for women and women during pregnancy. As people with lower levels of education were more likely to be classified as having insufficient alcohol health literacy, we can confirm our hypothesis. 4.1 Limitations Before interpreting the results further, we would like to highlight four important limitations of our study. First, we used a novel set of questions to measure alcohol health literacy, which had not been validated prior to or in the current study. This was necessary due to the lack of an existing instrument in the German language. Although the questionnaire was based on published surveys in the field, we cannot ascertain that it reliably and validly captures alcohol health literacy. Future studies may use our set of questions to develop a more comprehensive tool to capture alcohol health literacy that is systematically evaluated for reliability and validity. Second, we must acknowledge the limitations inherent in the nature of convenience samples. We do not expect the prevalence estimates to be generalisable to Germany at large. There is no reason to believe that the sample biases the association examined between educational attainment and alcohol health literacy, but we cannot rule out this possibility. Third, some respondents may have looked up answers to knowledge questions. This may bias our findings if this was more likely done by people within a specific education group. In anonymous online surveys, there is a low threshold for manipulating responses, but as there was no incentive to do so, we assume that few, if any, respondents engaged in this behaviour. Lastly, we only included n = 26 persons with low educational attainment, precluding gender‐stratified analysis and resulting in limited confidence for hypothesis testing. 4.2 Interpretation Comparing our results to a recent cross‐country study , we find remarkable similarities. As in our study, findings from 19,000 persons residing in 14 European countries show that knowledge pertaining to health risks from using alcohol is highest for liver diseases, followed by heart diseases and cancer, and lowest for respiratory diseases. People with higher education consistently reported better knowledge in that European study, but in our study, we find a possible reverse link (i.e., better knowledge among the lower educated) for mental health and respiratory as well as infectious diseases (see Figure ). Future studies should follow up whether the educational gap in knowledge differs by health outcome.
Given the observed education‐based differences in alcohol health literacy, the question arises as to why these exist. Obviously, the knowledge about potential health risks is an important component of alcohol health literacy that was assessed in our questionnaire and is directly linked to education. However, there are also structural issues that may contribute to lower health literacy in low‐educated individuals. For example, the federal prevention campaign ‘Alkohol—Kenn dein Limit’ (translation: ‘Alcohol—know your limit’; Bundeszentrale für gesundheitliche Aufklärung) was found to have a larger reach among adolescents and young adults with higher educational attainment , potentially contributing to the observed education‐based gap in alcohol health literacy. In Germany, there are only very few campaigns addressing adults, and these campaigns are often invisible in the day‐to‐day life of the people . A very simple measure to raise awareness of alcohol's health risks is the introduction of health warning labels on alcohol containers , which, however, is currently not part of the public discourse in Germany. Overall, the education‐based differences in alcohol health literacy were small to moderate (risk ratio 1.3 to 1.6). However, these small gaps should be interpreted in light of other possible determinants of increased harm among disadvantaged populations . There is heterogeneous data on the education gradient with respect to alcohol use from Germany. General population data suggest that people with lower educational attainment appear to be less likely to engage in alcohol use in general and in risky use patterns . In other words, the exposure to alcohol generally and to high levels of alcohol specifically is below average for lower educated adults in Germany. Currently, there is no data available on the education gradient of alcohol harms in Germany. As for treatment, it appears that people with hazardous drinking patterns are twice as likely to be given brief advice for hazardous drinking if they have a low educational background .
CONCLUSION This is the first study undertaking an in‐depth exploration of alcohol health literacy and its educational distribution in Germany. We found respondents with low educational attainment to be more likely to have insufficient alcohol health literacy compared to those with high education. This educational gap is concerning given worse alcohol‐related health outcomes in individuals with low socio‐economic status , as it potentially increases socioeconomic differences in alcohol‐attributable morbidity and mortality.
Moreover, our findings call into question the premise of the German alcohol policy that keeps alcohol largely unregulated and is based on the notion of informed consumer choice. However, given the observed systematic education‐based differences in alcohol health literacy, this premise appears to be no longer valid. To increase alcohol health literacy across all population groups, a comprehensive strategy is needed, including effective prevention programs in schools, easily accessible information on alcohol use, information provided in simple language and languages other than German, as well as health warning labels on alcohol containers . Conceptualisation: Carolin Kilian, Moritz Liebig, Jakob Manthey; Data curation: Moritz Liebig; Formal analysis: Carolin Kilian, Moritz Liebig; Methodology: Carolin Kilian, Moritz Liebig; Supervision: Jakob Manthey; Validation: Carolin Kilian, Moritz Liebig; Visualisation: Carolin Kilian; Writing—original draft: Carolin Kilian; and writing—review and editing: all authors. None. Data S1: Supporting information. |
National Guidelines for the Performance of the Sweat Test in Diagnosis of Cystic Fibrosis on behalf of the Croatian Society of Medical Biochemistry and Laboratory Medicine and the Cystic Fibrosis Centre - Paediatrics and adults, University Hospital Centre Zagreb | 5f99f1a7-399c-4e37-9536-9ad49257ab91 | 8833251 | Pediatrics[mh] | Cystic fibrosis Cystic fibrosis (CF) is an autosomal recessive disease caused by two mutations in the gene encoding the cystic fibrosis transmembrane conductance regulator (CFTR) protein which is found in the membranes of most epithelial cells in the human body . CFTR protein primarily regulates the transport of electrolytes through the cell membrane. Its widespread expression in different tissues explains the multisystemic nature of CF. CFTR dysfunction leads the fluid produced by organs to become viscous and to accumulate, so CF frequently presents as chronic lung disease, malabsorption followed by malnutrition, male infertility, and salt wasting syndrome, with the most common symptom of CF being excessive excretion of salt in sweat . Over time, patients with CF develop complications such as diabetes, low bone mineral density and chronic liver disease . Although CF is a monogenic disease, its clinical presentation varies from patient to patient, even among those with the same genotype, and the symptoms of one individual can vary during the course of the disease . Cystic fibrosis is considered the most common hereditary disease in Caucasians, with a worldwide incidence of 0.25-5 per 10,000 live births . The prevalence of the disease in the European Union is 0.75 per 10,000, which makes CF a rare disease . However, the total number of patients with CF is growing, reflecting improvements in diagnosis and treatment, so the number of affected adults is expected to substantially exceed the number of affected children in the future . According to published data from the European Cystic Fibrosis Society (ECFS) Patient Registry, 49,886 CF patients were registered in Europe in 2018, of whom 51.2% were adults . Croatia is included in the ECFS Patient Registry via its Database of Cystic Fibrosis Patients, which contained 132 patients in 2018, of whom 37% were adults. Demographic and epidemiological data suggest that 12-14 newborns each year have CF in Croatia .
There is an international consensus on the criteria for diagnosing CF. The first guideline was published in 1998, and the Croatian Society of Paediatric Gastroenterology, Hepatology and Nutrition (CSPGHN) of the Croatian Medical Association (CMA) published “A protocol for the Diagnosis of Cystic Fibrosis” in 2004 . Over the years, international guidelines have been updated in accordance with new findings, with the Consensus Guidelines from the Cystic Fibrosis Foundation (published in 2017) considered the current standard . Throughout their evolution, guidelines have retained the same basic starting points: in a patient showing clinical signs of the disease, impaired CFTR protein function should be tested to confirm the diagnosis. Clinical suspicion means that a person has at least one of the characteristic symptoms, tested positive during newborn screening for CF (not yet available in Croatia), or has a close family member with CF . One of the indicators of impaired CFTR protein function is elevated chloride concentration in sweat, as measured using the sweat test (ST) . The ST is therefore the first and most important laboratory test to confirm a CF diagnosis. Two other tests can assess CFTR protein function, but they are not widely used: in vivo measurement of the potential difference across the nasal mucosa, and ex vivo intestinal current measurement on rectal biopsies. Instead of demonstrating impaired CFTR protein function, diagnosis can also be confirmed by genetic analysis, but only by identifying two mutations that undoubtedly cause CF . More than half of patients newly diagnosed with CF in many European countries and around the world are diagnosed due to the newborn screening . Some infants that score positive during newborn screening but otherwise fail to fulfill other diagnostic criteria are described as having „CFTR-related metabolic syndrome (CRMS)” or „CF screen positive - inconclusive diagnosis (CFSPID)” . Since newborn screening is not performed in Croatia, the diagnosis of CF depends solely on the clinician’s awareness of disease symptoms from the neonatal period through adulthood. Regardless of whether screening is implemented or not, and regardless of how extensively genetic analysis is performed, the ST is the main test that reliably confirms disease. Therefore, the ST must be performed correctly and its results interpreted accurately for reliable confirmation of CF diagnosis . Sweat test The ST includes a series of procedures to evaluate CFTR protein function in the sweat glands of children and adults: stimulating sweating, collecting the sweat, measuring chloride concentration, and reporting and interpreting results. In CF patients, impaired CFTR protein function leads to elevated chloride concentration in sweat, which has given rise to an informal name of the ST in Croatian as „sweat chloride concentration measurement”. The ST is considered useful for diagnosing CF in 95% of patients. Recent studies show chloride concentration in sweat to be an outcome measure in clinical trials involving CFTR modulators . The preanalytical phase of the ST includes patient preparation, stimulation of sweating and sweat collection by pilocarpine iontophoresis (PI). The sweat glands are stimulated by applying weak current and a solution of pilocarpine nitrate soaked onto filter paper or gauze, or applied as a gel onto the subject’s forearm. Then sweat is collected onto chloride-free filter paper or gauze, or into a commercial capillary tube system. 
PI may be performed using a commercial system or the original or modified method of Gibson and Cooke . Before commercial PI systems became available, laboratories or paediatric outpatient clinics would typically use in-house equipment and materials adapted for the Gibson and Cooke method . The analytical phase of the ST includes chloride concentration measurement in the collected sweat using a validated method or conductivity measurement of sweat anions (chlorides, bicarbonates, lactates). Regardless of the analytical method, ST results are expressed in mmol/L. With the conductivity method, the result includes the concentration not only of chloride but also of other sweat anions, so the value represents the molar concentration of pure sodium chloride (NaCl) solution that has the same conductivity as the sweat sample at the same temperature; thus, the result is expressed as NaCl equivalents in mmol/L . According to current guidelines, the conductivity method can be used for screening, but not as a reference method for CF diagnosis . Commercial systems are available at the market for the pre-analytical and analytical phases separately or combined into an all-in-one system like the Nanoduct Neonatal Sweat Analysis System (Nanoduct, ELITechGroup, USA). It has begun to be used at some healthcare institutions in Croatia. It uses PI to stimulate sweating and it relies on the conductivity method to measure anions in sweat . In the postanalytical phase, interpretation of the ST results depends on the indication for testing. The indication may be to confirm or exclude a CF diagnosis or to monitor the response of patients receiving CFTR modulator therapy. A flowchart of the ST is shown in . The aim of the National Guidelines for the Performance of the Sweat Test (NGPST) in the diagnosis of CF is to standardise the ST across healthcare centres in Croatia, ensuring consistent procedural quality as well as accuracy of the results and their interpretation.
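To illustrate the idea of reporting conductivity as NaCl equivalents, the short R sketch below interpolates a measured sweat conductivity onto a calibration curve built from NaCl standards; the calibration values are hypothetical placeholders and do not come from these guidelines or from any specific analyser, which applies its own temperature-controlled calibration.

```r
# Illustrative sketch only: expressing a sweat conductivity reading as NaCl
# equivalents (mmol/L), i.e. the concentration of a pure NaCl solution with the
# same conductivity at the same temperature. Calibration pairs are hypothetical.

nacl_mmol_L     <- c(20, 40, 60, 80, 100, 120)        # NaCl calibrators (hypothetical)
conductivity_mS <- c(2.3, 4.4, 6.4, 8.3, 10.1, 11.9)  # measured conductivity (hypothetical)

conductivity_to_nacl_eq <- function(measured_mS) {
  # Interpolate the measured conductivity back onto the NaCl concentration scale
  stats::approx(x = conductivity_mS, y = nacl_mmol_L, xout = measured_mS, rule = 2)$y
}

conductivity_to_nacl_eq(7.0)  # result reported as NaCl equivalents in mmol/L
```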
International guidelines for the performance of the sweat test Several national guidelines for performing the ST have been published in Europe and worldwide, among which the most notable are the multidisciplinary UK Guidelines of 2004 and then 2014 (hereinafter referred to as the UK guidelines 2nd Edition), which were prepared according to principles of evidence-based medicine . Consensus guidelines from the Cystic Fibrosis Foundation for the diagnosis of CF recommend performing the ST according to validated protocols in order to ensure an acceptable quality of sweat collected as well as acceptable accuracy of the results .
A survey demonstrated the need for standardising the ST at the national level because of suboptimal practices at medical biochemical laboratories (MBLs) in Croatia . This survey therefore laid a solid foundation for developing the NGPST.
The NGPST are intended primarily for laboratory professionals and nurses involved in sweat testing, regardless of the method used, in order to facilitate standardisation of the ST across healthcare centres in Croatia. The guidelines are also intended for all healthcare professionals, as well as patients, parents/guardians and stakeholders interested in the performance and interpretation of the ST.
The NGPST are focused on standardisation of the ST across healthcare institutions in Croatia. Standardisation of the pre-analytical phase begins with ceasing the use of in-house equipment and materials for PI, which contradicts the content and goals of the guidelines. Standardisation of the analytical phase means introducing an analytical instrument to determine sweat chloride using a validated method. Standardisation of the post-analytical phase means generating a harmonised report of results, together with continuous user education and quality control. This process of ST standardisation may take time given its potential financial burden on local healthcare centres. Nevertheless, the NGPST may improve ST quality even in centres where the introduction of commercial equipment and instruments is delayed. By following the NGPST, centres using in-house instruments may improve their performance of the ST until they can achieve full standardisation. A standardised ST is necessary because it eliminates systematic error during all phases of testing and assays sweat chloride accurately in the target population.
The Working Group for Sweat Test Standardisation (WG STS) was established in 2017 in cooperation with the Committee for Scientific and Professional Development on behalf of the Croatian Society of Medical Biochemistry and Laboratory Medicine (CSMBLM) and the Croatian Centre for Quality Assessment in Laboratory Medicine (CROQALM), an external provider of quality control for all MBLs in the country . Since the performance of the sweat test is interdisciplinary work, a CF health professional joined the WG STS. In most of those MBLs (12/13), the ST has been performed using an in-house method for PI and mercurimetric titration (MMT) according to the Schales and Schales micro-method to assay chloride concentration in sweat eluate. During guideline development, WG STS members communicated via email and in virtual meetings (teleconferences) to draft this document and to render final decisions that would be published as recommendations in the final version of the document. The final version was sent to the WG members for approval and was emailed to professional societies and associations for feedback and comments. At the first teleconference in November 2017, it was decided that the NGPST should be based on the UK guidelines 2nd Edition, as they are the only existing evidence-based guidelines . Official permission to use the UK guidelines 2nd Edition in the NGPST was sought and granted by a contact person of the British Multidisciplinary Guideline Development Group, on the condition that instances be clearly marked where the UK guidelines were not applied because of specificities of performing the ST in Croatian healthcare institutions. The NGPST were developed using questionnaires from a survey conducted in 2018. The first was an update of a previously published survey covering all ST procedures . The second questionnaire examined the use of electrodes for PI and quality indicators, while the third investigated current power supply instruments. An additional questionnaire referred to the parts of the NGPST deviating from the UK guidelines 2nd Edition because of the ST equipment available for use in Croatian healthcare centres. The replies to all questionnaires are listed in and . The NGPST contain concise recommendations for performing the ST. They have been taken from the UK guidelines 2nd Edition with permission, except where indicated by the phrase "recommendation independent of the UK guidelines 2nd Edition". In these deviations, the WG STS arrived at consensus recommendations reflecting the particular ST equipment and procedures available in Croatia. More detailed information on the UK guidelines 2nd Edition can be found in the original document . In order to harmonise the NGPST with the latest findings and recommendations about the ST appearing after the UK guidelines 2nd Edition, the WG members searched the MEDLINE database using the PubMed search engine to identify relevant publications from 01 January 2013 to 01 February 2021. Search terms included cystic fibrosis, sweat test, sweat chloride, conductometry and biological variation. The following publications were identified: national guidelines of professional societies , a review , and a cross-sectional study . In addition, the Google search engine was used to search for information available on the ST in Croatian (01 September – 01 December 2019). Where applicable, the authors of the NGPST used the „Appraisal of Guidelines for Research and Evaluation" (AGREE) tool .
The NGPST will be emailed to all WG members, professional societies and associations that took part in their development. An electronic version of the NGPST will be available in Croatian and English on the CSMBLM website.
According to an ECFS survey of 136 centres and laboratories in Europe, more than half offer the ST at prices ranging between 20 and 100 EUR . The Croatian Health Insurance Fund, the national compulsory health insurance scheme, reimburses the costs of the ST to contracted healthcare centres in the amount of 55.60 HRK or 7.50 EUR . Since performing the ST according to standardised procedures requires commercial systems and instruments that raise the cost of the test above this reimbursement level, MBLs can expect an additional financial burden. On the other hand, standardisation should ensure a high-quality diagnostic procedure, reducing the rate of false-negative or -positive results as well as the rate of quantity not sufficient (QNS) samples, ultimately reducing the overall healthcare system's expenses related to CF diagnosis.
Indications for sweat testing (recommendations independent of the UK guidelines 2nd Edition) A primary care physician should refer a patient with suspected CF to a CF specialist for further diagnostic work-up. Once a national CF newborn screening is introduced, newborns with a positive screening result should be referred to a CF centre. Informing the patients and parents/guardians about the sweat test Before testing, the patients and parents/guardians should be informed about the purpose of the ST and about the patient's indication for the test. The MBL should provide pre-test information about the sweat test (recommendations independent of the UK guidelines 2nd Edition). Pre-test information should give insight into the purpose of the test, patient preparation, the test procedures, associated risks, and contact details about testing. An example of an ST information leaflet can be found in and can be modified according to the MBL or the clinical needs (recommendations independent of the UK guidelines 2nd Edition). Patient's suitability for the sweat test ST is recommended for a term infant (newborn) who is at least two weeks old and weighs more than 2 kg at the time of testing, and who is normally hydrated without significant systemic disease. ST is not recommended before the first 7 days of life, especially the first 48 hours, as tests during this period show unacceptably high rates of false-positive results or PI failure. ST should be delayed in subjects who are oedematous or who are receiving topiramate or 9-alpha fludrocortisone. Testing should also be delayed in subjects who show dehydration, malnutrition, unstable clinical condition or eczema at the site of sweat stimulation. ST can be performed in subjects on Flucloxacillin therapy. ST is not recommended in subjects receiving oxygen by an open delivery system, such as a headbox. This recommendation does not apply to a child receiving oxygen through a nasal prong or face mask. The newborn screening for cystic fibrosis and sweat test The National Programme for Rare Diseases 2015-2020, issued by the Ministry of Health of the Republic of Croatia, includes CF and recognises newborn screening for CF as a practice in some European countries . However, it is not part of the newborn screening programme in Croatia. The following recommendations are necessary in light of the importance of the ST in newborn screening for CF: A positive result of newborn screening for CF should be confirmed by an ST involving PI and measurement of chloride concentration in sweat. The ST should be performed by laboratory/outpatient clinic staff experienced in testing infants younger than 3 months old.
Sites for the pilocarpine iontophoresis
The flexor surface of either forearm is the recommended site for PI. Other acceptable sites are the upper arm or thigh if both arms are eczematous or otherwise unsuitable for sweat stimulation and collection. Using cleaning solutions containing chloride is not recommended. Performing the ST on only one arm is sufficient except in the event of sample contamination, QNS sample or other abnormality.
Pilocarpine iontophoresis
Recommendations for in-house PI
The European Pharmacopoeia should be followed when preparing solutions for in-house PI (recommendation independent of the UK guidelines 2nd Edition). The power supply must be battery-powered and should include a safety cut-out that limits the amount of current to a maximum of 5 mA. The two electrodes should be made of stainless steel or copper, with a shape and size that allow them to be fastened securely to the subject’s forearm without pain or excessive pressure (recommendation independent of the UK guidelines 2nd Edition). The electrodes should be regularly inspected and cleaned. The recommended electrode size is 3.75 x 3.75 cm (recommendation independent of the UK guidelines 2nd Edition). Pilocarpine nitrate solution (2-5 g/L) should be used to stimulate sweat. The chloride-free filter paper or gauze for sweat stimulation should be larger than the electrodes used for PI (recommendation independent of the UK guidelines 2nd Edition). Electrodes should be assembled with filter paper or gauze soaked in pilocarpine nitrate solution and then fixed to the flexor surface of the forearm in a way that prevents electrode movement and bridging with pilocarpine solution during sweat stimulation (recommendation independent of the UK guidelines 2nd Edition). A maximum current of 4 mA should be applied for at least 3 minutes but no more than 5 minutes. Chloride-free filter paper or gauze for sweat collection should be larger than the stimulated area: for example, gauze or filter paper measuring 5 x 5 cm should be used for electrodes measuring 3.75 x 3.75 cm (recommendation independent of the UK guidelines 2nd Edition). Before PI, the filter paper or gauze should be placed in labelled, tightly closed containers and weighed using an analytical balance sensitive to at least 0.0001 g. Every container with a sweat sample should be labelled with a patient’s barcode or permanent ink (recommendation independent of the UK guidelines 2nd Edition). Pre-weighed filter paper or gauze for sweat collection should be placed as soon as possible at the site of PI and fixed in a way that prevents evaporation and contamination of sweat during collection. Sweat collection should take 30 minutes. After sweat collection, the filter paper or gauze should be returned to the same tightly closed container used for weighing, then delivered to the laboratory if the PI is performed in an outpatient clinic (recommendation independent of the UK guidelines 2nd Edition). The filter paper or gauze with collected sweat should be reweighed before chloride analysis (recommendation independent of the UK guidelines 2nd Edition). Sweat samples collected from different sites should not be pooled and analysed together (recommendation independent of the UK guidelines 2nd Edition). The user must verify the instrument, equipment and materials used for in-house PI (recommendation independent of the UK guidelines 2nd Edition).
Recommendations for PI using commercial equipment
The commercial instrument must be battery-powered with an automatic safety cut-out. Pilocarpine gel discs containing pilocarpine nitrate at a concentration of 2-5 g/L should be used to stimulate sweating. Pilocarpine gel discs should not be used if any damage is noticed (recommendation independent of the UK guidelines 2nd Edition). A commercial system including disposable collectors for sweat collection is recommended. Parts of commercial kits should not be combined with in-house instruments, equipment or materials for sweat stimulation or collection.
Sample stability and preparation for analysis
Sweat collected with a commercial kit can be stored for a maximum of 3 days at 2-8 °C in a tightly closed container. Sweat collected on filter paper or gauze can be kept at 2-8 °C for a maximum of 3 days in a tightly closed container. The weight of collected sweat should be consistent with a required minimum sweat secretion rate of 1 g/m²/min. Each laboratory, depending on the equipment used, should define a minimum weight of collected sweat that is acceptable for the analytical phase ( : „Calculation of the minimum acceptable sweat weight“) (recommendation independent of the UK guidelines 2nd Edition); a worked example of this calculation is given at the end of this section. Collected sweat weighing less than the defined minimum acceptable weight should not be analysed. Sweat elution longer than 1 hour is not recommended (recommendation independent of the UK guidelines 2nd Edition). Samples collected onto filter paper or gauze should be mixed well before analysis.
Sweat analysis
Chloride is the analyte of choice when analysing sweat for CF diagnosis. Measurement of osmolality or concentrations of sodium or potassium in sweat is not recommended. Quantitative colorimetry, coulometry and ion-selective electrodes (ISE) are the methods recommended for sweat chloride analysis. Conductivity is not acceptable as a reference method for CF diagnosis. In children younger than 6 months, an ST involving sweat chloride measurement should be performed if a conductivity test gives a negative result. In case of a positive or borderline conductivity result, an ST measuring chloride concentration should be performed in children older than 6 months and adults.
Recommendation for chloride analysis (recommendations independent of the UK guidelines 2nd Edition)
Mercurimetric titration according to the Schales and Schales micro-method is an acceptable analytical technique for chloride analysis in sweat or eluted sweat only if the technique has been validated in that matrix. Mercury nitrate solution used in the MMT must be safely removed from MBLs according to laboratory waste management standards.
Conductivity method
The chloride concentrations in sweat determined using the Nanoduct or any other instrument based on conductivity (e.g. Sweat Check Analyser, ELITechGroup, USA) are generally higher than the chloride concentrations determined using analytical methods that measure only chloride, such as coulometry. The authors of the British and Australasian guidelines for the performance of the ST consider the conductivity method unacceptable for CF diagnosis . According to the guidelines for the performance of the ST of the American Clinical and Laboratory Standards Institute, a NaCl equivalent of ≥ 50 mmol/L obtained by conductivity should be confirmed by measurement of chloride concentration in sweat . Unpublished data indicated that the Nanoduct was used in eight of nine paediatric units and one of nine MBLs across five general hospitals, one county hospital, two specialty hospitals and one clinical hospital centre covering most counties in Croatia.
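As referenced in the sample stability recommendations above, each laboratory should derive its own minimum acceptable sweat weight from the minimum secretion rate of 1 g/m²/min, the area of the collection paper or gauze and the collection time. The Python sketch below is a hedged illustration of that calculation only; the 5 x 5 cm collection area and 30-minute collection time are taken from the recommendations above, while the function name and rounding are choices of this example, and each laboratory should verify the figure against its own equipment and procedures.

```python
def minimum_sweat_weight_g(area_cm2: float, collection_minutes: float,
                           min_rate_g_per_m2_per_min: float = 1.0) -> float:
    """Minimum acceptable sweat weight (g) = minimum secretion rate x area x time."""
    area_m2 = area_cm2 / 10_000  # convert cm^2 to m^2
    return min_rate_g_per_m2_per_min * area_m2 * collection_minutes

# Example: 5 x 5 cm collection gauze and a 30-minute collection,
# as recommended above for 3.75 x 3.75 cm electrodes.
weight_g = minimum_sweat_weight_g(area_cm2=5 * 5, collection_minutes=30)
print(f"Minimum acceptable sweat weight: {weight_g * 1000:.0f} mg")  # prints 75 mg
```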
Contaminated or evaporated sweat samples should not be analysed (see section on PI above). The analytical method must show a linear response over the range of chloride concentrations in the sweat of healthy and CF subjects. The procedures of the ST must be documented in the quality management system and in accordance with national professional standards (recommendation independent of the UK guidelines 2nd Edition). Internal quality control (IQC) must be applied to every batch of samples. In-house IQC samples can be used (recommendation independent of the UK guidelines 2nd Edition). For every batch of samples, IQC should be conducted at three concentrations: normal level, < 30 mmol/L; borderline level, 30-59 mmol/L; and abnormal level, ≥ 60 mmol/L (recommendation independent of the UK guidelines 2nd Edition). Methods for determining chloride (but not conductivity) should have between-batch imprecision lower than 5% at concentrations of 40-50 mmol/L. The conductivity method should have between-batch imprecision lower than 2% at a concentration of 50 mmol/L. The laboratory must participate in an external quality assessment (EQA) scheme. Records of the results of internal and external quality control must be kept as part of medical laboratory documentation in accordance with national regulation (recommendation independent of the UK guidelines 2nd Edition).
Since 2015, the CROQALM has provided national EQA for the ST analytical phase through a separate module in three rounds annually (one sample per round) . All MBLs in Croatia that perform the ST and report results participate in this national EQA. Between 2015 and 2020, the control material was an in-house aqueous NaCl solution at a chloride concentration range of 20-100 mmol/L. Since 2021, a commercial quality control system has been in use. The report contains graphical and tabular representations, statistical analysis of the results according to the Tukey method, as well as an interpretation of the reported chloride concentration .
Non-physiological results (chloride concentration > 150 mmol/L or conductivity > 170 mmol/L) should be investigated for errors. The laboratory or outpatient clinic should monitor the rate of QNS samples due to insufficient weight or volume of collected sweat. The QNS rate should not exceed 10% of the tested population, excluding repeat sampling and children younger than 6 months, with a target of less than 5% in children older than 6 months. Among children younger than 6 months, the rate of failed sweat collection should not exceed 20% of the tested population. The QNS rate should be expressed as a percentage of total tests and as a percentage per operator, and records of the QNS rate and the associated reports should be maintained. lists quality indicators for the performance of the ST. Testing should be repeated if the result is inconsistent with the subject’s clinical condition and/or with the results of genetic testing. lists causes and conditions that can contribute to false-positive and false-negative ST results . Results should be reviewed with clinicians, in particular repeat sweat collections and positive results.
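The between-batch imprecision criteria above can be checked from routine IQC records by computing the coefficient of variation of repeated IQC measurements across batches. The short Python sketch below is illustrative only; the 5% threshold for chloride methods is taken from the recommendation above, while the IQC values and function name are hypothetical placeholders that a laboratory would replace with its own records.

```python
import statistics

def between_batch_cv(iqc_results):
    """Coefficient of variation (%) of IQC results measured across batches."""
    mean = statistics.mean(iqc_results)
    sd = statistics.stdev(iqc_results)  # sample standard deviation
    return 100 * sd / mean

# Illustrative IQC chloride results (mmol/L) for the 40-50 mmol/L control level,
# one value per analytical batch; real data would come from the laboratory's IQC records.
chloride_iqc = [44.8, 45.6, 43.9, 46.1, 44.2, 45.0, 44.5, 45.9, 43.7, 44.9]

cv = between_batch_cv(chloride_iqc)
print(f"Between-batch CV: {cv:.1f}%")
print("Meets the <5% criterion for chloride methods" if cv < 5
      else "Exceeds the 5% criterion - investigate")
```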
Chloride concentration by colorimetry, coulometry, ion-selective electrodes (recommendation independent of the UK guidelines 2nd Edition)
Normal level ≤ 29 mmol/L
Borderline level 30-59 mmol/L
Elevated level ≥ 60 mmol/L
NaCl equivalents by the conductivity method
Normal level < 50 mmol/L
Borderline level 50-90 mmol/L
Elevated level > 90 mmol/L
Biological variation and reference change value (recommendation independent of the UK guidelines 2nd Edition)
The laboratory may use data from the biological variation database that are appropriate for the tested population. A laboratory that employs a standardised procedure for PI and sweat chloride analysis may use the reference change value in order to evaluate the clinical significance of testing results.
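As an illustration of how these cut-offs and the reference change value (RCV) concept can be applied, the hedged Python sketch below classifies a sweat chloride result and computes a two-sided RCV using the standard formula RCV = √2 × Z × √(CVA² + CVI²). The cut-offs mirror the levels listed above, while the analytical and within-subject biological variation figures are placeholder values that each laboratory would replace with its own verified data.

```python
import math

def classify_sweat_chloride(chloride_mmol_l):
    """Classify a sweat chloride concentration using the cut-offs above."""
    if chloride_mmol_l <= 29:
        return "normal"
    if chloride_mmol_l <= 59:
        return "borderline"
    return "elevated"

def reference_change_value(cv_analytical, cv_within_subject, z=1.96):
    """Two-sided RCV (%) for a significant difference between two results."""
    return math.sqrt(2) * z * math.sqrt(cv_analytical**2 + cv_within_subject**2)

# Placeholder variation data (%): replace with the laboratory's own analytical CV
# and a population-appropriate within-subject biological variation for sweat chloride.
cv_a, cv_i = 4.0, 8.0

print(classify_sweat_chloride(47))                          # 'borderline'
print(f"RCV: {reference_change_value(cv_a, cv_i):.1f}%")
# Two results differing by more than the RCV (relative to the first result)
# may be considered a clinically significant change.
```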
The report format should include full information for patient identification, date and time of the test and report, the analyte measured, the analyte result, analytical method used, reference ranges and interpretation of the results, as well as a reason if no result is reported. The format of reports from MBLs should be in line with the National Recommendation for Post-analytical Laboratory Work (recommendation independent of the UK guidelines 2nd Edition). The weight or volume of the collected sweat should be stated in the laboratory report (recommendation independent of the UK guidelines 2nd Edition).
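The minimum reporting elements listed above can be represented as a simple structured record, as in the hedged Python sketch below; the field names paraphrase the elements in the recommendation, and all values are fictitious placeholders rather than a prescribed report layout.

```python
# Illustrative structure covering the minimum reporting elements recommended above.
sweat_test_report = {
    "patient_identification": "XXXXXX",          # full patient identification
    "test_datetime": "2023-05-04T09:30",
    "report_datetime": "2023-05-04T13:00",
    "analyte": "sweat chloride",
    "result_mmol_per_l": 47,
    "reason_not_reported": None,                 # state a reason if no result is reported
    "analytical_method": "coulometry",
    "reference_ranges": {"normal": "<= 29", "borderline": "30-59", "elevated": ">= 60"},
    "interpretation": "borderline result - interpret with the referring clinician",
    "collected_sweat_weight_mg": 112,            # weight or volume of collected sweat
}
```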
Sweat collection and analysis should be performed only by qualified and skilled operators. All levels of operator training should be documented. Appropriate procedures for training staff and assessing their ST skills should be defined. Trained operators should perform a minimum of 10 tests per year in order to maintain testing quality (recommendation independent of the UK guidelines 2nd Edition). An individual holding a master’s degree in medical biochemistry, a specialist in medical biochemistry and laboratory medicine, a medical doctor or a specialist in laboratory medicine should be responsible for training and evaluating staff who perform the ST in the MBL, as well as for supervising the entire workflow, including quality assessment and revision of quality indicators, reporting of results and discussions with specialists. Every phase of the ST in the MBL should be controlled by a qualified specialist, including analysis of unexpected results and implementation of corrective actions (recommendation independent of the UK guidelines 2nd Edition). If the ST is partially or completely performed outside the MBL, an individual holding a master’s degree in medical biochemistry, a specialist in medical biochemistry and laboratory medicine, a medical doctor or a specialist in laboratory medicine should evaluate the frequency of quality control assessment (including revision of quality indicators) and unexpected results, as well as the implementation of corrective actions (recommendation independent of the UK guidelines 2nd Edition). The doctor or nurse who supervises staff performing the ST fully or in part outside the MBL should keep records on personnel training, evaluation and competence. The supervisor should verify that the ST is performed in accordance with written instructions, and that test results and quality control assessments (including rates of QNS and failed tests per operator) are fully documented (recommendation independent of the UK guidelines 2nd Edition).
The NGPST, developed through the joint work of the CSMBLM and CSPGHN, will be evaluated by the Committee for Professional Concerns of the Croatian Chamber of Medical Biochemists and will be the subject of public discussion by CSMBLM members prior to publication.
Excellent clinical and radiological outcomes after arthroscopic reduction and double row‐suture bridge for large‐sized greater tuberosity fractures of the humerus | e5e0c8b0-a11f-4ce2-a0b9-d5e586748f5d | 11948161 | Surgery[mh] | Greater tuberosity fractures of the proximal humerus represent approximately 20% of all proximal humeral fractures, with only 5% requiring surgical intervention . The presence of greater tuberosity fracture and dislocation of the anterior shoulder joint has been reported in 5%–57% of cases [ , , ]. Traditionally, surgical intervention involved open reduction and internal fixation, but recent studies have shown promising results with an arthroscopic approach [ , , , , , ]. Arthroscopic surgery offers several advantages, including easy visualization, the ability to mobilize fracture fragments, and a suitable field of view for addressing intraarticular lesions or bipolar lesions of the greater tuberosity and glenoid fossa . This makes it particularly beneficial for fixing small fracture fragments . Additionally, many greater tuberosity fractures are accompanied by partial‐thickness rotator cuff tears, and arthroscopic surgery allows for concurrent rotator cuff repair, adding to its versatility [ , , , ]. However, arthroscopic fixation also has its challenges. It requires a long learning curve, and the surgeon's experience and skill level significantly impact the surgical outcome. Moreover, the operation time is generally longer than that for open surgery . Despite these challenges, previous studies have reported good results with arthroscopic reduction and fixation for small‐sized greater tuberosity fractures . Although surgical techniques for arthroscopically managing large‐sized fractures have been described , the clinical outcomes for large greater tuberosity fractures with fragments >30 mm have not yet been reported. Therefore, this study aimed to assess the radiological and clinical outcomes of arthroscopic reduction and double‐row suture bridge fixation for large greater tuberosity fractures of the proximal humerus. By evaluating these outcomes, this study hypothesized that arthroscopic reduction and double‐row suture bridge fixation would prove to be a safe, effective and minimally invasive treatment option for managing large greater tuberosity fractures.
This retrospective study was approved by our Institutional Review Board (IRB No. 2022‐06‐009‐001). The requirement for obtaining informed consent was waived. A retrospective analysis was conducted on 52 patients who underwent arthroscopic reduction and double‐row suture bridge fixation for greater tuberosity fractures between February 2014 and February 2021. Arthroscopic reduction and double‐row suture bridge fixation were performed in cases where displacement exceeded 5 mm, or 3 mm for patients engaged in hard labour.
Inclusion and exclusion criteria
Among the 52 patients who underwent arthroscopic reduction and double‐row suture bridge fixation for avulsion‐ and split‐type greater tuberosity fractures, those with bone fragments >30 mm on three‐dimensional computed tomography and with follow‐up of over 2 years were included. Exclusion criteria were bone fragment size <30 mm, ipsilateral glenohumeral joint arthritis, rheumatoid arthritis, posttraumatic arthritis, history of previous surgery and receipt of worker compensation. The displacement and size of the fracture fragments were measured using three‐dimensional computed tomography prior to surgery. Of the total patients, 30 had large greater tuberosity fractures with a diameter >30 mm. Among these, 15 patients with a postoperative follow‐up period of >24 months were selected for the study.
Surgical procedure
All surgical procedures were performed by two surgeons (S. H. K. and Y. D. J.), with the patients in a beach chair position under general anaesthesia. The joint was inspected using an arthroscope inserted through a standard posterior viewing portal. A trans‐cuff portal was then created, and two double‐loaded suture anchors were inserted on the medial side of the fracture site. For fractures involving the entire greater tuberosity, an anchor was placed in the articular cartilage area nearest to the fracture. All suture strands were passed through the intact rotator cuff tendon using the shuttle relay technique. The arthroscope was then moved to the subacromial space, where a bursectomy was performed to improve visibility. The fracture site was identified, and the haematoma was removed using a motorized shaver. With the strands passing through the rotator cuff confirmed, fracture site reduction was achieved using a Freer elevator and switching stick. Medial knot tying was avoided, and the knotless technique was employed in all cases. Next, the location for the lateral anchor was selected to ensure maintained reduction. Two lateral anchors were inserted just posterior to the bicipital groove of the proximal humerus using a double‐row suture bridge technique, providing a buttress for the fracture fragment (Figures and ). Finally, arthroscopy confirmed that the reduction was maintained during internal and external rotation of the humerus.
Outcome assessment
All outcome assessments were performed by two fellowship‐trained shoulder surgeons. Radiological outcomes were assessed by evaluating fragment step‐off on immediate postoperative radiographs to confirm anatomic reduction. Intraobserver and interobserver reliability were assessed. The timing of bone union was recorded, with radiological union defined as complete cortical bridging between the humeral head and greater tuberosity observed on radiographs. For functional outcomes, the range of motion, Visual Analog Scale (VAS) score, American Shoulder and Elbow Surgeons (ASES) score and University of California‐Los Angeles (UCLA) score were evaluated at the last follow‐up. Active shoulder range of motion was measured using a goniometer for forward flexion and external rotation at the side, while internal rotation was graded by the vertebral level reached by the patient's thumb (1–12 for thoracic vertebrae, 13–17 for lumbar vertebrae and 18 for the sacrum). Postoperative complications were also assessed.
Statistical analysis
All statistical analyses were performed using SPSS v.25.0 software (IBM Corp.). The paired t test or Wilcoxon signed‐rank test was used to compare pre‐ and postoperative clinical scores, depending on the results of the Kolmogorov–Smirnov normality test. Statistical significance was set at p < 0.05. The intraclass correlation coefficient (ICC) was used to assess intraobserver and interobserver reliability. The ICC is typically classified as follows: scores above 0.8 denote excellent agreement, scores between 0.6 and 0.8 indicate substantial agreement, scores between 0.4 and 0.6 reflect moderate agreement, scores between 0.2 and 0.4 suggest fair agreement and scores between 0.1 and 0.2 represent slight agreement . A power analysis was conducted using ASES scores to evaluate the adequacy of the sample size of 15 patients. A paired design was used to test whether the paired difference in distributions (δ) differed from 0 (H0: δ = 0 versus H1: δ ≠ 0), using a two‐sided, paired‐difference Wilcoxon signed‐rank test with a Type I error rate (α) of 0.05. The underlying standard deviation of the paired difference distribution was assumed to be 6.8. To detect a paired mean difference of 49 with a sample size of 15 pairs, the computed power was 1.0. The power was computed using PASS 2023, version 23.0.3.
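For readers without access to PASS, the reported power can be approximated by simulation, as in the hedged Python sketch below. The sample size (15 pairs), assumed paired-difference mean (49), standard deviation (6.8) and α = 0.05 are taken from the description above, while the normality assumption for the paired differences and the function names are assumptions of this illustration rather than part of the original analysis.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)

def simulated_power(n_pairs=15, mean_diff=49, sd_diff=6.8, alpha=0.05, n_sim=2000):
    """Estimate power of a two-sided Wilcoxon signed-rank test by simulation,
    assuming normally distributed paired differences."""
    rejections = 0
    for _ in range(n_sim):
        diffs = rng.normal(mean_diff, sd_diff, n_pairs)
        stat, p = wilcoxon(diffs)  # tests H0: the paired differences are centred on 0
        if p < alpha:
            rejections += 1
    return rejections / n_sim

print(f"Estimated power: {simulated_power():.3f}")  # expected to be approximately 1.0
```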
The study included 15 patients (eight men and seven women) with a mean age of 53.9 ± 11.5 years. The average follow‐up period was 57.7 ± 23.1 months. The mean fracture fragment size was 32.5 ± 2.4 mm, and the mean displacement was 5.1 ± 1.6 mm. Concomitant injuries included shoulder joint dislocation in six of 15 patients (40%) and Bankart lesions in four of 15 patients (27%, with no bony Bankart lesions observed). A partial‐thickness supraspinatus tear was present in nine of 15 patients (60%), one of whom also had a partial‐thickness infraspinatus tear. A subscapularis tear was present in one of 15 patients (6.7%). No full‐thickness rotator cuff tears were observed. Brachial plexus neuropathy occurred in one of 15 patients (6.7%), and the affected patient recovered 6 months after injury. Radiological assessment revealed avulsion‐type greater tuberosity fractures in seven patients and split‐type fractures in eight patients, with seven of the 15 patients having comminuted fractures. A step‐off of <3 mm immediately after arthroscopic reduction and double‐row suture bridge fixation was observed in 13 of 15 patients (86.7%), and the mean union time was 3 months postoperatively. All measurements showed excellent intraobserver and interobserver reliability (Table ). All final postoperative range of motion and functional outcome scores showed significant improvement compared with preoperative scores, with particularly notable increases in ASES and UCLA scores ( p < 0.001) (Table ). No complications were reported, including cases of nonunion, malunion, or complications related to suture anchors.
The main finding of our study was that arthroscopic reduction and double‐row suture bridge fixation produced excellent radiological and clinical outcomes, even for large greater tuberosity fractures. Notably, 86.7% of patients achieved a step‐off of <3 mm immediately after surgery, and all showed significant improvements in postoperative functional outcomes. Importantly, no complications were reported. The surgical indications for greater tuberosity fractures are still a topic of debate. Generally, surgery is considered for displacements of >5 mm, or >3 mm in professional athletes and workers. Conversely, conservative treatment is recommended for depression‐type fractures . The necessity of surgical intervention for displacements between 3 and 5 mm remains controversial, as even a 3‐mm displacement can alter rotator cuff biomechanics and impact functional outcomes . In this study, surgical treatment was performed for displacements of 3 mm or more, particularly in patients engaged in hard labour. The rotator cuff attaches to the greater tuberosity of the humerus, and in high‐energy trauma, a large greater tuberosity fracture can occur due to the strong contraction of the rotator cuff . The deforming forces from the pull of the rotator cuff muscles should be considered when reducing fracture fragments . Most greater tuberosity fractures involve the supraspinatus and infraspinatus facets, leading to displacement in the posterosuperior direction . Therefore, reduction can be achieved by passing sutures through the intact rotator cuff area. Previous studies using arthroscopy primarily focused on relatively small greater tuberosity fractures (< 20 mm) . For larger bone fragments, arthroscopic‐assisted plate fixation is sometimes employed . However, our study uniquely addresses larger fractures, with a minimum fracture size of 30 mm and an average size of 32.7 mm. This study is the first to report on arthroscopic surgical outcomes for greater tuberosity fractures with a diameter of ≥30 mm. In this study, 86.7% of patients had a step‐off of the greater tuberosity fragments within 3 mm immediately after surgery, as confirmed radiographically. While complete anatomical reduction might not always be achieved with arthroscopic surgery compared with open surgery, bone union was successfully achieved in all patients without any radiological complications. Ji et al. reported postoperative residual superior and/or posterior displacement of 0.1 and 0.2 mm, respectively, after arthroscopic surgery in patients with greater tuberosity fractures . However, their study excluded patients with large greater tuberosity fractures. Our study specifically focused on these larger fractures and demonstrated successful anatomical reduction. Previous studies reported postoperative union durations ranging from 8 to 20 weeks . In our study, bone union was achieved in all patients at an average of 3 months postoperatively. This is faster than the average union time reported for open reduction surgery and comparable to the results of a previous study using arthroscopic reduction surgery . Compared with open surgery, arthroscopic surgery minimizes disruption to the soft‐tissue envelope, which accelerates the bone healing cascade and results in rapid bone union. Additionally, it reduces postoperative fibrosis and facilitates rehabilitation, enabling rapid functional recovery .
Several studies have demonstrated good functional outcomes with arthroscopic surgical treatment of greater tuberosity fractures. Ji et al. reported successful results by combining metal internal fixation with arthroscopic suturing . However, only two studies have directly compared the outcomes of open reduction surgery and arthroscopic surgery for humeral greater tuberosity fractures . The comparative study by Liao et al. found that although arthroscopy had a longer surgical time (95.3 vs. 61.5 min), it resulted in a better range of motion and higher ASES scores. However, in that study, patients with isolated displaced greater tuberosity fractures underwent open reduction when the displacement was >1 cm or the fragment size exceeded 3 cm, whereas arthroscopic surgery was performed for displacements <1 cm or fragment sizes <3 cm. This discrepancy in surgical indications means that the study by Liao et al. cannot be considered a comparison of equivalent cohorts. In their study, patients with fragments 3 cm or larger treated with open reduction had forward flexion of 137° and an ASES score of 87.4 after a minimum of 2 years. In contrast, our study exclusively performed arthroscopic surgery on greater tuberosity fractures >3 cm. At the final follow‐up, we observed superior outcomes, with forward flexion of 166° and an ASES score of 92.8, compared with the results reported by Liao et al. . Similarly, Yoon et al. achieved an ASES score of 92.6 at 2 years postoperatively using minimally invasive open reduction and internal fixation with a screw and washer for large greater tuberosity fractures (33 ± 6 mm) . These findings suggest that arthroscopic fixation could be a viable alternative for treating large‐sized greater tuberosity fractures. A systematic review reported a complication rate of approximately 15% following greater tuberosity fracture surgery . Potential complications included stiffness, continued pain, heterotopic ossification, anchor protrusion/pullout, unplanned implant removal, loss of reduction and malreduction . In contrast, our study found no cases of surgery‐related complications, indicating that arthroscopic surgery may have a high safety potential. However, the small sample size in our study limits its clinical significance, and further large cohort studies are necessary to confirm these findings. This study had some limitations. First, this was a retrospective study with a small sample size, with only 15 patients followed for more than 2 years. Second, this case series did not allow for comparisons with open reduction interventions. Third, the absence of complications and the favourable outcomes reported may be attributable to the small sample size. Future research should include randomized controlled studies to compare the outcomes of arthroscopic surgery and open reduction based on fracture size.
Arthroscopic reduction and double‐row suture bridge fixation for large‐sized greater tuberosity fractures is safe and shows good fracture reduction and excellent clinical outcomes. Therefore, this surgical method can be considered an alternative to open reduction for large greater tuberosity fractures.
Sang‐Hun Ko and Young Dae Jeon : Conceptualization. Young Dae Jeon : Methodology. Ki‐Bong Park : Validation. Sangheon Oh : Formal analysis. Jaemin Oh : Investigation. Jaemin Oh : Data curation. Jaemin Oh and Young Dae Jeon: Writing—original draft preparation. Young Dae Jeon : Writing—review & editing.
The authors declare no conflict of interest.
This study was approved by the Institutional Review Board of Ulsan University Hospital (IRB No. 2022‐06‐009‐001). The requirement for informed consent was waived due to the retrospective study design.
Patient perspectives of diabetes care in primary care networks in Singapore: a mixed-methods study | dd124037-c9c7-4a9f-a642-49d8cad3b378 | 10734143 | Patient-Centered Care[mh] | Type 2 diabetes (T2D) is a prominent chronic condition, projected to affect 783 million people worldwide by 2045 . Poor patient outcomes can be mitigated by providing high quality care comprising effective management and care integration . However, implementation of care remains a challenge for many healthcare systems poorly designed for coordinated chronic care delivery [ – ]. Primary care is able to provide integrated first-contact and accessible care for T2D patients, thanks to longitudinal and holistic interactions with patients and their families . Recent health policy developments in Singapore offer an excellent opportunity to examine specific primary care arrangements in the delivery of T2D care. Primary care is provided by 1,800 private general practitioner (GP) clinics and 23 public polyclinics . Majority of GP clinics are single-handed practices , while polyclinics are large team-based practices. Patients receive government subsidies for polyclinic care, which have been extended in 2012 to certain primary care practices. The Primary Care Networks (PCNs) formed in 2018 are networks of GPs organised into teams with nurses and care coordinators to deliver chronic disease management by providing ancillary services (diabetic retinal and foot screening and counselling) and care coordination. Patients with chronic conditions could use government subsidies such as Community Health Assist Scheme (CHAS) and their savings (MediSave) , thereby reducing cash payment in the PCN clinics. There were 10 PCNs with 607 clinics in 2021, organised following three types . The first type is called the GP-led PCN, formed and coordinated by partnering single-handed GPs who helm both the clinical and administrative leadership roles. There are five PCNs comprising 200 clinics under the GP-led type. The second PCN type called the Group PCN is led by two large GP corporate groups comprising 82 clinics. The third PCN type called the Cluster PCN is a partnership between single-handed GPs and the regional health clusters that included polyclinics . Under the Cluster type, there are three PCNs with 325 clinics. In the Group and Cluster types, the PCN clinical leader is a GP, while the administrative leadership role is assumed by the corporate groups or cluster with whom the GPs have partnered with. The clinical leader oversees the clinical governance and development of the PCNs, while the administrative leader manages the administration in the PCNs . A large majority of GP clinics in the PCNs are single-handed clinics, including those who belonged to the Group or Cluster types. After these clinics joined the PCNs, majority of them received more access to diabetes nurse services but its use could be different based on different PCN types. Only two Cluster type PCNs were organised by geographical boundaries. All other PCNs were generally located across the country. The PCNs were not organised using specific patient or clinic characteristics. The Chronic Care Model (CCM) is a framework that supports high-quality chronic disease management that is planned, coordinated, patient-centred [ – ], and effective in improving patients’ clinical outcomes [ – ]. Over the years, the CCM has been implemented in the polyclinics for management of chronic conditions [ – ]. 
However, it is not known whether the PCNs incorporate elements of the CCM in providing diabetes care for their patients. Patient engagement is an important indicator of effectiveness in chronic disease management . Thus, it is crucial to investigate patients’ views of care, which may differ from those of their providers [ – ]. Yet, this is not known with respect to T2D care in the PCNs, thereby necessitating research in this area. Therefore, the aims of this study are: 1) to examine the quality of care in the PCN clinics as defined by the CCM from the patients’ perspective, and 2) to explore its determinants, including the provision of care services, individual patient factors and the different PCN types.
Design
A cross-sectional convergent mixed-methods design was used for the quantitative and qualitative studies (Fig. ). Data collection and analysis of the studies were performed concurrently and independently to support the rigorous application of the two methods . Integration was performed at study design and data analysis using the CCM as guidance. The findings from the quantitative and qualitative studies were subsequently triangulated to derive integrated results and interpretations that expanded the understanding of the quantitative results , and provided an in-depth knowledge of the participants’ perspectives . This deeper understanding of the trends and patterns ensured generalizability and transferability of the results. Convergence was performed at the data summary level and not at the individual patient level. We sent emails to all the PCN clinics explaining the study. For the PCN GPs who agreed, we recruited their patients for a survey from the clinic waiting areas or clinic lists between August 2021 and January 2022 following convenience sampling. Patients were eligible to participate in the study if they were 21 years or older, had a diagnosis of T2D identified by the GP, and had no cognitive impairment. Participants completed the survey of the quantitative arm using either paper surveys or an online link. We purposively sampled participants from the quantitative study ensuring maximal variation by considering patients’ age, gender, ethnicity, and the PCN type of their clinic. We conducted individual interviews in English over the telephone without video function due to constraints during the COVID-19 pandemic and patients’ preference. The principal investigator LHG, a family physician with qualitative research training, conducted the interviews using a semi-structured interview guide comprising open-ended questions, which were used to prompt participants to share their views on these general issues (Additional file ). The interview questions were created in parallel with the Patient Assessment of Chronic Illness Care (PACIC) questionnaire to enable collection of contextual information related to the CCM concepts of chronic care delivery from the qualitative data. If patients did not understand any questions during the interviews, we rephrased the questions or asked follow-up questions that helped with obtaining focused answers. LHG introduced herself as a family physician who was interested in hearing the participants’ views on how they received their diabetes care in the PCNs, to understand and identify any areas of care delivery that were done well and areas that needed improvement. Participants were advised that they need not answer questions that they felt uncomfortable with and that they could give their views freely without concerns that their care would be affected by what they said during the study. Interviews lasted 60 minutes on average and were audiotaped. Field notes were taken by LHG and reflexive notes were written following each interview. Interviews were stopped upon reaching data saturation at the 24th interview. Patients voluntarily gave their written informed consent and were reimbursed S$20 per arm for their participation. Ethics approval was obtained from the National University of Singapore Institutional Review Board (Reference Code LS-19-298).
Measurements
A survey captured patients’ sociodemographic data and medical needs (age, gender, ethnicity, years of education, number of comorbid conditions, and use of cash payment), service-related variables (length of consultation with the GP, number of nurse services received, and number of diabetes medications), and the PCN type. All information was self-reported by the patients and not collected from clinic records. Quality of PCN care was examined using the Patient Assessment of Chronic Illness Care (PACIC) in the English language . The PACIC is a questionnaire that captures patients’ perceptions of CCM-based services that they could be expected to observe . The PACIC contains 20 items reflecting the 5 subscales of patient-centred care: patient activation, delivery system design/decision support, goal setting/tailoring, problem-solving/contextual counselling, and follow-up/coordination (Additional file ). Each item is scored on a 5-point Likert scale, ranging from 1 (almost never) to 5 (almost always). Each subscale was scored by averaging responses for the items within that subscale, and the PACIC summary score was the average of all responses to the 20 items. Higher scores indicate the extent to which patients reported having received CCM-based services. Content validation of the PACIC was performed and resulted in minor adaptations (Cronbach’s alpha of 0.93). The necessary sample size for the PACIC survey was calculated to be 309, given the population of patients with T2D in the PCNs (35,667), a margin of error of 0.1, and a 95% confidence level .
Analysis
Bivariate analyses were performed between the PACIC summary scores and potential determinants using Pearson’s or Spearman’s correlation for continuous variables, independent t-tests for binary categorical variables, and one-way ANOVA or the Kruskal-Wallis test for categorical variables with more than two categories. Variables with a p-value <.1 in these analyses, or with clinical or conceptual relevance, were entered into linear regression models in a stepwise manner, starting with the service-related variables, then adding patients’ characteristics, and finally the PCN types. Between each step, independent variables with a p-value >.1 were removed from the model. Continuous variables were standardized before being entered into the regression models. Variance inflation factors remained < 2 in all models. Statistical significance was set at α < 0.05. We used complete case analysis. The analyses were conducted using the Statistical Package for the Social Sciences (SPSS, Version 28, IBM Corp., Armonk, NY, USA) and R statistical software (Version 3.6.1). Qualitative interviews were transcribed verbatim. Each transcript was coded by a primary coder (LHG), who identified and organised the codes into a codebook. Coding was performed using an inductive approach guided by thematic analysis, involving familiarisation with the data, generating codes, generating themes, reviewing themes, and defining and naming themes. The transcripts were closely followed and coded multiple times to capture the original meaning of the data. Two other coders, CJRS and MAL, independently coded 13 and 11 transcripts respectively to ensure concordance with the codebook. Initial codes were checked for duplicates and similarities. Similar codes were grouped under subthemes and further aggregated into themes guided by the CCM framework. LHG, CJRS and MAL discussed the meaning of the codes, subthemes, and themes until consensus was reached on the final list.
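To make the PACIC scoring rule described in the Measurements subsection concrete, the sketch below shows one way the subscale and summary scores could be computed in R. The item-to-subscale grouping follows the original 20-item PACIC instrument; the data frame and column names (pacic_1 to pacic_20) are hypothetical placeholders, not the study's actual data or code.

```r
# Minimal sketch of PACIC scoring (illustrative; not the authors' code).
# Assumes a data frame with items pacic_1 ... pacic_20 scored 1-5.
# The item grouping below follows the original 20-item PACIC instrument.
subscales <- list(
  patient_activation    = 1:3,
  delivery_design       = 4:6,
  goal_setting          = 7:11,
  problem_solving       = 12:15,
  followup_coordination = 16:20
)

score_pacic <- function(dat) {
  items <- dat[paste0("pacic_", 1:20)]
  out <- data.frame(row.names = rownames(dat))
  for (nm in names(subscales)) {
    # Each subscale score is the mean of the responses to its items
    out[[nm]] <- rowMeans(items[paste0("pacic_", subscales[[nm]])])
  }
  # The summary score is the mean of all 20 item responses
  out$pacic_summary <- rowMeans(items)
  out
}

# Example with simulated responses for five hypothetical patients
set.seed(1)
dat <- as.data.frame(matrix(sample(1:5, 5 * 20, replace = TRUE), nrow = 5))
names(dat) <- paste0("pacic_", 1:20)
score_pacic(dat)
```

Higher subscale and summary values correspond to more frequently reported CCM-based care, mirroring the 1 (almost never) to 5 (almost always) response scale.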
Thematic saturation was achieved during analysis. All researchers reviewed and agreed on the final list of codes, subthemes, and themes. Data were managed using NVivo qualitative data analysis software (Release 1.7.1) . For the mixed-methods integration, quantitative and qualitative results were compared side by side in a joint comparison table, using the CCM as guidance. Key concepts were developed to answer the research questions. Integration was classified as confirming or disconfirming, depending on whether the quantitative or qualitative results confirmed or contradicted the key concepts, and as expanded if the results broadened the understanding of the key concepts .
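As a concrete illustration of the block-wise regression strategy described in the Analysis subsection (standardized continuous predictors, sequential entry of service-related variables, patient characteristics, and PCN type, with a variance inflation factor check), a minimal R sketch is shown below. The variable names are hypothetical, the car package is assumed to be available for vif(), and the simplified block entry here omits the between-step pruning of predictors with p > .1, so it is an illustration rather than the authors' exact procedure.

```r
# Illustrative sketch of the hierarchical linear regression (not the authors' code).
# Assumes a data frame `dat` with a PACIC summary score and the hypothetical
# predictors used below.
library(car)  # assumed available; provides vif()

# Standardize continuous predictors before entering them into the models
dat$consult_min_z    <- as.numeric(scale(dat$consult_min))
dat$nurse_services_z <- as.numeric(scale(dat$nurse_services))
dat$n_meds_z         <- as.numeric(scale(dat$n_meds))
dat$age_z            <- as.numeric(scale(dat$age))

# Model 1: service-related variables only
m1 <- lm(pacic_summary ~ consult_min_z + nurse_services_z + n_meds_z, data = dat)

# Model 2: add patient characteristics (pruning of p > .1 predictors omitted here)
m2 <- update(m1, . ~ . + age_z + ethnicity)

# Model 3: add PCN type
m3 <- update(m2, . ~ . + pcn_type)

summary(m3)   # coefficients, p-values, adjusted R-squared
vif(m3)       # multicollinearity check (the study reports VIF < 2)
sapply(list(m1, m2, m3), function(m) summary(m)$adj.r.squared)
```

The final line compares the adjusted R-squared across the three models, which is how the incremental contribution of each block of predictors is judged in the Results.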
Characteristics of participants in quantitative study
Participants were recruited from all 10 PCNs (Additional file ). A total of 343 patients from 81 PCN GP clinics (13.3% of the 607 clinics) participated in the quantitative study. Of the 81 clinics, 42 (51.9%) were from the GP-led type (200 clinics in total), 15 (18.5%) from the Group type (82 clinics in total), and 24 (29.6%) from the Cluster type (325 clinics in total). Most participants received care at GP-led type clinics (n=197; 57.4%); the others received care at either Group type clinics (n=49; 14.3%) or Cluster type clinics (n=97; 28.3%) (Table ). Participants in Cluster type clinics were older than in the other two types (mean 57.6 years vs 53.6 years for the GP-led type and 53.5 years for the Group type) (η²=0.03, 95% CI [0.003, 0.07], p=.006). Most participants were of Chinese ethnicity (73.2%), and the ethnicity distribution differed significantly between PCN types (Cramer’s V=0.17, p=.005). Participants had received a median of 13 years of education (IQR 10-15, range 0-15), with no difference between PCN types. Participants had a median of one comorbid condition (IQR 1-2, range 0-4); those in the Cluster type had a median of two conditions compared with one in the other types. Most participants made cash payments (64.1%). More participants used cash payment in the GP-led type than in the other types (68% vs 44.9% for the Group type and 66% for the Cluster type) (Cramer’s V=0.17, p=.009). Data on the length of GP consultations were missing for three participants who skipped this question (Table ), accounting for 0.09% missing data; no other data were missing. The median length of GP consultation was 15 minutes (IQR 10-20, range 4-60). Participants used a median of two nurse services (IQR 1-4; range 0-8) across the clinics. Significantly more nurse services were used by patients in the GP-led type than in the other two types (three vs two for both the Group and Cluster types) (η²=0.04, 95% CI [0.007, 0.08], p=.002). The median number of diabetes medications taken by patients was one (IQR 1-2, range 0-5).
Patient-reported quality of care
The mean PACIC summary score was 3.21 (SD 0.75). The delivery system design/decision support subscale attained the highest mean score of 3.81 (SD 0.76), followed by the patient activation subscale at 3.44 (SD 1.04), the problem-solving/contextual counselling subscale at 3.36 (SD 0.93), the goal setting/tailoring subscale at 3.10 (SD 0.83), and the follow-up/coordination subscale at 2.71 (SD 0.90) (Table and Additional file ). Correlation analysis between PACIC summary scores and potential determinants was performed (Additional file ). Length of GP consultation (rs=0.19, 95% CI [0.08, 0.29], p<.001), number of nurse services (rs=0.12, 95% CI [0.02, 0.23], p=.025), and number of diabetes medications (rs=0.15, 95% CI [0.04, 0.25], p=.006) were positively correlated with PACIC summary scores, while age (r=-0.25, 95% CI [-0.35, -0.15], p<.001) was negatively correlated. Other variables were not significantly correlated with PACIC summary scores. Chinese ethnicity (Cohen’s d=-0.33, 95% CI [-0.57, -0.09], p=.009) was associated with lower PACIC summary scores (Additional file ). Other variables were not associated with PACIC summary scores.
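For readers who want to see how the kinds of statistics reported above can be obtained, the base-R sketch below illustrates a Spearman correlation, a one-way ANOVA with eta-squared, and a chi-square test with Cramer's V. The data frame `dat` and its columns are hypothetical placeholders, and the confidence intervals reported in the paper would require additional steps (for example, bootstrapping) that are not shown.

```r
# Illustrative base-R calculations (not the authors' code); `dat` is hypothetical.

# Spearman correlation between consultation length and the PACIC summary score
cor.test(dat$consult_min, dat$pacic_summary, method = "spearman", exact = FALSE)

# One-way ANOVA of age across PCN types, with eta-squared computed from the
# sums of squares: eta^2 = SS_effect / SS_total
fit <- aov(age ~ pcn_type, data = dat)
ss  <- summary(fit)[[1]][["Sum Sq"]]
eta_sq <- ss[1] / sum(ss)

# Chi-square test of ethnicity by PCN type, with
# Cramer's V = sqrt(chi-square / (n * (min(rows, cols) - 1)))
tab <- table(dat$ethnicity, dat$pcn_type)
chi <- chisq.test(tab)
cramers_v <- sqrt(unname(chi$statistic) / (sum(tab) * (min(dim(tab)) - 1)))

c(eta_squared = eta_sq, cramers_v = cramers_v)
```

The same pattern extends to the other reported comparisons, such as Cohen's d for two-group differences.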
In the multivariate analysis (Table ), the length of GP consultations was positively associated with higher PACIC summary scores (p=.008) in all three models. The number of diabetes medications was positively associated with higher PACIC summary scores in the first model (p=.032), but not in subsequent models. Age was associated with lower PACIC scores (p<.001). PCN type was not associated with PACIC summary scores. All three models were significant. The service-related variables accounted for 3% of the variance in PACIC summary scores in Model 1 (adjusted R²=0.03). Adding patients’ characteristics in Model 2 increased the model fit (adjusted R²=0.09), yet adding PCN type in Model 3 did not, despite excluding non-significant covariates (adjusted R²=0.08).
Qualitative study
The sample comprised 24 participants with fair representation from each PCN type (11 from the GP-led type, six from the Group type, and seven from the Cluster type). The participants’ median age was 55 years (IQR 42.3-66.8, range 24-75), with an equal distribution of males (n=12) and females (n=12). Ten participants were of Chinese ethnicity. Participants had a median of 13 years of education (IQR 10-15, range 10-15) and a median of one comorbid condition (IQR 0-2, range 0-3). Eleven participants (45.8%) used cash payment. The median length of their GP consultation was 15 minutes (IQR 11.3-20, range 10-45). Participants received a median of three nurse services (IQR 2-5, range 0-7) and took a median of one medication (IQR 1-2, range 0-4).
Themes and subthemes of patient experiences with diabetes care
Five themes and 18 subthemes with representative quotes covering the patients’ experiences of the diabetes care received at the PCN clinics were identified (Additional file ). The five themes were: Theme 1, Team-based diabetes services provided by PCNs (two subthemes); Theme 2, PCN features that were favoured by patients (five subthemes); Theme 3, Opportunity for PCNs to collaborate with community partners (three subthemes); Theme 4, Financial aspects of PCN care (three subthemes); and Theme 5, Enhancements that PCNs should consider (five subthemes).
Theme 1: Team-based diabetes services provided by PCNs
Patients appreciated that they received convenient nurse ancillary and care coordinator services at the clinics. “Dr S told me a month back, like there would be a workshop and are you interested? I said yeah, because I haven't been able to do my eye test because of this COVID measures for last year, so it’s good that I can get it done here.” (P24)
Theme 2: PCN features that were favoured by patients
Patients valued that they saw the same doctor for their diabetes, highlighting how rapport brought confidence about the prescribed treatment. “Because I’m used to the doc, she knows my condition, what medicine to give me. Then she share so many things with me, right? How to improve my condition and advise me, check my blood test. Because she knows my condition, then I know the doctor can help me or not.” (P2) Patients liked that they spent sufficient time with their GPs. “She (the GP) doesn't talk to you in a hurry, in a hurried manner. She takes time to listen to me and yeah, just to know ... any current discomfort or anything that I need to find out from her, she's readily available for me.” (P15) Patients felt they were treated holistically by their GPs when discussing treatment care plans and when taught problem-solving skills.
“He (the GP) said whether I can go for a brisk walk around the park, like running, jogging, whatever I’m at the park. If free, I can do it anytime I want, that kind of thing.” (P14) Patients felt actively engaged and supported by their GPs and nurses to set goals for their diabetes. “If the (blood glucose) is high, they (the GPs) try to check with me, what I have been doing for the past, like my diet. The way they approach the patient, something like that, which I feel comfortable.” (P6) Patients appreciated the convenient access to the clinics. “I mean it's (PCN clinic) also at the most convenient location, because ultimately, that's primary care… it's got to be easy access.” (P17)
Theme 3: Opportunity for PCNs to collaborate with community partners
Patients felt that GPs and polyclinics could collaborate to deliver diabetes care by having shared medical records and polyclinic-subsidised medications. “Maybe medication-wise, it's possible for the GP to take the cheaper ones from the polyclinic to give [to] those who can’t really afford (the medications).” (P21) Most patients were not referred to community programmes such as support groups or exercise classes for their diabetes, nor were they aware of such programmes. “When I went to the clinic, the doctor recommended me to this diabetes association. But I live in Jurong (west of Singapore), and most of the activities, they have it at east side, like Bedok. Yeah, so it’s not convenient. After a while, I didn’t renew my membership.” (P6)
Theme 4: Financial aspects of PCN care
Patients found that PCN care was affordable by using the Community Health Assist Scheme (CHAS) subsidy and their medical savings (MediSave). “I see him (the GP) every two months. So, it ranges from $60 to $130, but it is subsidised by CHAS. After my CHAS finishes, I use MediSave. So, cash payment I pay quite little, around $20 to $40, which to me is affordable.” (P23) However, there were worries that medication costs would increase with time. “She (the GP) was doing a different brand and one box was $50. Two months of fenofibrate only added up to $11 (from the polyclinic). I can see her for monitoring for blood tests. But when it comes to taking medications every day and it’s for life. To sustain this cost, it’s just not worth it (to see the GP).” (P8) Patients felt that more subsidies for medications would help them stay with their GPs. “If they (government) reduce the price to 50% or even 25%, it actually helps a lot. Subsidise the medication, that's most expensive.” (P18)
Theme 5: Enhancements that PCNs should consider
Patients identified specific areas for improvement in the PCNs. “But the (paper) record is very thick, like visiting this doctor for the last 10 years. Yeah, not the most efficient way to manage a patient. The cloud storage is the way towards the future. When you're traveling and something happened, at least you can send this report to the doctor there.” (P11) “The nurse is always going all over (to different clinics). If you can have everything under one roof, that would be good.” (P17)
Integration of quantitative and qualitative results
The Patient Assessment of Chronic Illness Care (PACIC) subscales were compared with the subthemes in a joint comparison table (Table ). Eight subthemes confirmed the key concepts derived from the quantitative results, five subthemes disconfirmed the key concepts, three subthemes expanded the understanding of a key concept, and five subthemes were not covered by any PACIC subscale.
All subthemes were corroborated by patients’ quotes (Additional file ). The eight subthemes that confirmed the key concepts were: 1.1 Nurse ancillary services provided, 1.2 Care coordination and follow-up provided, 2.1 Follow-up by same GP, 2.2 Adequate consultation time with GP, 2.3 Patient-centred care received, 2.4 Engaged and supported by GPs, 2.5 Convenient access to PCN care, and 3.3 Referral to community programmes. The five disconfirming subthemes were: 1.2 Care coordination and follow-up provided, 2.3 Patient-centred care received, 3.3 Referral to community programmes, 5.4 Increase self-care information in patient education, and 5.5 Enable more allied health services. The three expanded subthemes were: 3.1 Shared care with polyclinics, 3.2 Subsidised medications from polyclinics, and 5.2 Increase access to nurse services. The five subthemes that were not integrated were: 4.1 Affordable PCN fees, 4.2 Rising medical costs, 4.3 More government subsidies needed, 5.1 Increase physical space in PCN clinics, and 5.3 Increase use of electronic medical records. Finally, five key concepts were derived after integration of the quantitative and qualitative results: i) Patient activation was sometimes received, ii) Delivery system design/decision support was sometimes received, iii) Goal setting/tailoring was sometimes received, iv) Problem-solving/contextual counselling was sometimes received, and v) Follow-up/coordination was generally not received.
Summary
Patients with T2D perceived that they sometimes received integrated care based on the Chronic Care Model (CCM) in the Singapore Primary Care Networks (PCNs), and this varied across domains but not across PCN types. Patients were satisfied with the nurse services provided, the good continuity of care, the sufficient consultation time with their GPs, the patient-centred approach to care involving goal setting and problem-solving, the engagement and support provided, and the convenient access to services. However, referral to community programmes and follow-up/coordination were seldom provided. While patients found PCN care affordable, concerns about rising medical costs prompted suggestions to provide more medication subsidies. Specific recommendations for improvement were also made.
Strengths and limitations
This study is, to our knowledge, the first to describe how patients with T2D perceive diabetes care in the Singapore PCNs. We recruited widely across the PCNs to ensure representativeness, and we achieved a negligible amount of missing data. Our mixed-method design enabled us to triangulate our findings and investigate in detail the areas of care in need of improvement. Our study has a number of limitations. The observational and cross-sectional design weakens the case for causality in the observed associations between PCN types and clinic features and perceived quality of care in the quantitative data. The data were obtained by patient self-report, which could be subject to recall bias . Random sampling, rather than convenience sampling, would have made the case for the generalisability of the findings more compelling. Nonetheless, our PACIC summary scores were similar to those in studies from other countries, such as the US [ , , ], Australia , the Philippines , and Taiwan . We were unable to obtain information about non-PCN clinics or about non-participating patients in the PCN clinics to compare their characteristics with those of our participants. The lack of such information could have affected the generalisability of the quantitative results and the transferability of the qualitative results. Nevertheless, we were able to compare our study participants with local data. Our study had more male participants, which is consistent with local data showing that more males have diabetes . Additionally, the ethnicity ratio reflected the Singapore population, in which Chinese is the predominant ethnicity. There was an imbalance in our sampling across PCN types, which could limit the generalisability of the survey findings. However, adjustment for relevant variables in the stepwise regression reduced the impact of this limitation. Finally, although patients were selected based on a diagnosis of T2D, the questions in the PACIC are not specific to diabetes care. Hence, patients' responses could also have been influenced by care received for other chronic conditions. Again, triangulation of the quantitative and qualitative results ensured a degree of specificity that might otherwise have been missing.
Comparison with existing literature
We found that the quality of care provided in the PCNs varied according to the domain of interest, that is, the PACIC subscales.
The subscale scores in our study showed trends similar to studies from Denmark , the Philippines , and Switzerland : the patient activation, delivery system design/decision support, and problem-solving/contextual counselling subscales attained higher scores, while the goal setting/tailoring and follow-up/coordination subscales attained lower scores. Managing people with diabetes involves complex delivery system designs and coordination . Thus, delivery system design and decision support were also the two most commonly implemented elements in clinical practice [ – ]. In contrast, goal setting/tailoring and follow-up/coordination were often under-used in chronic disease management . Delivery system design/decision support attained the highest score and was confirmed by four subthemes. Firstly, patients received additional nurse ancillary services at the clinics, a feature of collaborative team-based care [ , – ]. Secondly, patients were satisfied that they saw the same doctor for their diabetes [ – ]. Rapport built with their GPs over time could have favourably influenced their perception of care. Continuity of care with doctors has been related to patient satisfaction , lower mortality , greater adherence to medications , and reduced healthcare utilization . Thirdly, patients appreciated having sufficient consultation time with their doctor, a feature often lacking in primary care . Our results were similar to a 2014 local study, which reported that 72% of GPs spent 15.8 minutes with their patients . Sufficient consultation time with physicians enabled effective communication about diabetes , greater patient activation and involvement in care [ – ], increased satisfaction with care , higher levels of enablement , and more patient-centred care . Fourthly, patients liked the convenient access to the clinics, including the extended opening hours , being able to walk to the clinics from home, and acceptable waiting times; these findings were comparable to two local studies . In contrast, a disconfirming subtheme was that patients noticed the lack of allied health services in the PCNs. Although dietitians and pharmacists contribute to improved outcomes in the care of patients with diabetes, and their roles are recommended in diabetes management guidelines , allied health services were not well integrated in primary care . Presently, the nurses in the PCNs have assumed the role of providing nutrition advice in the absence of a dietitian. Three subthemes expanded our understanding of the issues under delivery system design/decision support. The first two related to working more closely with the public polyclinics. Patients suggested that polyclinics and PCN clinics provide shared care or shared services, since polyclinics have ancillary nurse services under one roof on a daily basis , in contrast to the PCN clinics, where such services are less frequent. Patients also proposed that the subsidised medications available to polyclinic patients be extended to PCN patients. Lastly, the subtheme on increasing access to nurse services suggested that these might presently be inadequate to meet some patients’ needs [ – ]. Our patients gave high scores on the patient activation subscale, with confirming evidence from the patient-centred care demonstrated by their GPs through listening to patients’ illness concerns, discussing treatment plans and care goals, and problem-solving . Additionally, there was confirming evidence that patients felt engaged and supported by their GPs in actualising their treatment plans.
However, one subtheme disconfirmed this key concept, with patients requesting more self-care information from their GPs and nurses, as advised in diabetes guidelines . This subtheme suggested that the present education might not fully address patients’ self-management support needs. Under problem-solving/contextual counselling, the subtheme of patient-centred care supported the key concept that patients sometimes received care in this domain in the PCNs. The confirming quote from this subtheme suggested that the GPs and nurses understood the patients’ perspectives, provided pertinent information to facilitate their autonomy in making treatment decisions , and involved them in decision-making and problem-solving when managing their diabetes [ , , ]. In contrast, the disconfirming quote suggested that patients sometimes disregarded their GP’s advice when the problem-solving did not consider their preferences or context. Care gaps were identified in the goal setting/tailoring subscale. Goal setting builds on the patient’s context and values, and improves their care experiences . Under the confirming subtheme of patient-centred care, many patients perceived that their GPs considered their preferences in goal setting and tailored the health advice to their situations. However, no written copy of the treatment care plan was given to patients, which might have reduced the effectiveness of the goal setting. The subtheme of referral to community programmes disconfirmed the key concept, which advocated drawing on community programmes to support patients in setting goals for their diabetes. There were subthemes confirming that follow-up/coordination was generally not received by patients. Referrals to community programmes to support patients in maintaining a healthy lifestyle after clinic visits might be suboptimal and under-utilised owing to unawareness of their existence, lack of accessibility, or inconvenience. However, some patients disconfirmed this key concept, having received care coordination for scheduling clinic appointments from the PCN care coordinators. Lastly, there were subthemes that were not integrated with the quantitative results but remained important to understanding patients’ perspectives on PCN care. Patients’ financial concerns about medication costs and requests for government subsidies were key factors that could influence adherence and access. A local study showed that a new diagnosis of diabetes alongside other common comorbid conditions, such as hypertension and hyperlipidaemia, incurred the greatest costs for patients, driven by new anti-diabetic medications and extra tests for monitoring and screening . Medication costs for patients with diabetes constituted 43% of the total direct medical burden in the US in 2017, comprising $15 billion for insulin, $15.9 billion for other antidiabetic agents, and $71.2 billion in excess use of other prescription medications . Increased patient cost-sharing for medications was significantly associated with decreased adherence, which in turn was associated with poorer health outcomes . In contrast, ensuring affordable medication costs could remove a barrier to accessing standard diabetes care . The patients’ observation that their GPs’ clinics were too small to accommodate extra nurse services is concordant with studies showing how the design and layout of team spaces in primary care clinics affect how team members work together .
However, lack of physical space could be overcome with digital technologies and services such as telemedicine . Additionally, patients’ calls for their GPs to use electronic medical records instead of paper records were supported by evidence showing that patients with chronic conditions benefitted most from electronic medical records containing decision support tools for their physicians, communication tools that informed them of their treatment, and reporting and tracking capabilities that informed them of their progress . This study showed that the PCN types were not associated with the PACIC summary scores. Some aspects of quality, such as administrative and/or information technology support, might not be observable by patients. Additionally, the majority of the PCN clinics were single-handed practices and were likely to be similar in their structure and operations despite joining different PCN types. Lastly, we found that younger people experienced more integrated care in our study, in contrast with other studies that did not find an association with age [ , , , , ]. Older people with chronic conditions have increased and often unmet health and social needs , requiring more medical care and support services . Therefore, older people in this study could have perceived that more could be done for them to feel adequately supported in their diabetes care. The younger adults in our study could have perceived a higher quality of care from the PCNs, based on their more positive perceptions of themselves as young people .
Implications for policy, practice, and research
As patients suggested, PCN GPs might consider collaborating with polyclinics to provide shared care and subsidised medications, making greater use of community programmes, increasing clinic space to accommodate more nurse and allied health services, increasing the use of electronic medical records, and providing more self-care education. There was a wider concern about rising medical costs at the GP clinics and a call for more subsidies. All these issues merit consideration for improving the quality of care in the PCNs in Singapore, while addressing barriers to the use of available services. Examination of the perspectives of PCN healthcare professionals is warranted to corroborate and expand the understanding of these findings and to optimise healthcare delivery in Singapore for people with T2D.
Patients with T2D perceived that they sometimes received integrated care based on the Chronic Care Model (CCM) in the Singapore Primary Care Networks (PCNs) which varied across domains but not across PCN types. Patients were satisfied with the nurse services provided, the good continuity of care provided, having sufficient consultation time with their GPs, as well as the patient-centred approach to care involving goal setting and problem-solving, the engagement and support provided, and the convenient access to services. However, referral to community programmes and follow-up/coordination was seldom done. While patients found PCN care affordable, concerns about rising medical costs prompted suggestions to provide more medication subsidies. Specific recommendation for improvement were also made.
This study is the first to our knowledge to describe how patients with T2D perceive diabetes care in the Singapore PCNs. We recruited widely across the PCNs to ensure representativeness and we achieved a negligible amount of missing data. Our mixed-method design enabled us to triangulate our findings and investigate in detail the areas of care in need of improvement. Our study has a number of limitations. The observational and cross-sectional design weakens the case for causality in the observed associations between PCN types and clinic features, and perceived quality of care in the quantitative study data. The data was obtained by patients’ self-reporting that could have recall bias . Random sampling, rather than convenience sampling, would have also made the case for the generalisability of the findings more compelling. Nonetheless, our study PACIC summary scores were similar to those in studies from other countries, such as the US [ , , ], Australia , Philippines , and Taiwan . We were unable to obtain information about non-PCN clinics or about other non-participating patients in the PCN clinics to compare their characteristics with those of our participants. The lack of such information could have affected the generalisability of the quantitative results and the transferability of the qualitative results. Nevertheless, we were able to compare our study participants with local data. Our study had more males which concurred with the local population where more males have diabetes . Additionally, the ethnicity ratio reflected the Singapore population where Chinese is the predominant ethnicity. There was a disbalance in our sampling across PCN types, which would potentially limit the generalisation of survey findings. However, adjustment for relevant variables in the stepwise regression reduced the impact of this limitation. Finally, although patients were selected based on a diagnosis of T2D, questions in the PACIC are not specific of care for diabetes. Hence, patients' responses could also be influenced by care received for other chronic conditions. Again, triangulation of the quantitative and the qualitative results ensured a degree of specificity that might have been otherwise missing.
We found that PCNs provided quality of care that varied according to the domain of interest or subscales. The subscale scores attained in our study showed similar trends with studies from Denmark , Philippines , and Switzerland : patient activation, delivery system design/decision support, and problem-solving/contextual counselling subscales attained higher scores, while goal setting/tailoring and follow-up/coordination subscales attained lower scores. Managing people with diabetes care involves complex delivery system designs and coordination . Thus, delivery system design and decision support were also the two most commonly implemented elements in clinical practice [ – ]. In contrast, goal setting/tailoring and follow-up/coordination were often under-used in chronic disease management . Delivery system design/decision support attained the highest score and was confirmed by four subthemes. Firstly, patients received additional nurse ancillary services at the clinics, a feature of collaborative team-based care [ , – ]. Secondly, patients were satisfied that they saw the same doctor for diabetes [ – ]. Rapport built with their GPs over time could have favourably influenced their perception of care. Continuity of care with doctors was related to patient satisfaction , lower mortality , greater adherence to medications , and reduced healthcare utilization . Thirdly, patients appreciated sufficient consultation time with their doctor, a feature often lacking in primary care . Our results were similar to a 2014 local study that 72% of GPs spent 15.8 minutes with their patients . Sufficient consultation with physicians enabled effective communication in relation to their diabetes , greater patient activation or involvement in their care [ – ], increased satisfaction with care , higher levels of enablement , and was more patient-centred . Fourthly, patients liked the convenient access to the clinics by the extended opening hours , the convenience of walking to the clinics from home, and the acceptable waiting time, which had comparable findings to two local studies . In contrast, there was a disconfirming subtheme that patients noticed the lack of allied health services in the PCNs. Although dietitians and pharmacists contributed to improved outcomes in the care of diabetes patients, and their roles recommended in diabetes management guidelines , allied health services were not well integrated in primary care . Presently the nurses in the PCNs have assumed the role for nutrition advice in the absence of a dietitian. There were three subthemes that expanded our understanding of the issues under delivery system design/decision support. The first two related to working closer to the public polyclinics. Patients suggested that polyclinics and PCN clinics have shared care or shared services since polyclinics have the ancillary nurse services under one roof on a daily basis , in contrast to the PCN clinics with less frequent services. Patients also proposed that subsidised medications available to polyclinic patients be extended to PCN patients. Lastly, the subtheme on increasing access to the nurse services suggested that they might be presently inadequate to meet some patients’ needs [ – ]. Our patients gave high scores for the patient activation subscale which had confirming evidence from the patient-centred care demonstrated by their GPs through listening to their complaints of illness, discussing treatment plans and care goals, and problem-solving . 
Additionally, there was confirming evidence that patients felt engaged and supported by their GPs in actualising their treatment plans. However, one subtheme disconfirmed this key concept, with patients requesting more self-care information from their GPs and nurses, as advised in diabetes guidelines . This subtheme suggested that the present education might not be fully addressing patients' self-management support needs. Under problem-solving/contextual counselling, the subtheme of patient-centred care supported the key concept that patients sometimes received care in this domain in the PCNs. The confirming quote from the subtheme suggested that the GPs and nurses understood the patients' perspectives, provided pertinent information to facilitate their autonomy in making decisions about treatment , and involved them in decision-making and solving problems when managing their diabetes [ , , ]. In contrast, the disconfirming quote suggested that the patient sometimes disregarded their GP's advice when the problem-solving did not consider their preferences or context. There were care gaps identified in the goal setting/tailoring subscale. Goal setting grounded in the patient's context and values improves their care experiences . Under the confirming subtheme of patient-centred care, many patients perceived that their GPs considered their preferences in goal setting and tailored the health advice to their situations. However, no written copy of their treatment care plan was given to the patients, which might have reduced the effectiveness of the goal setting. The subtheme of referral to community programmes disconfirmed the key concept, which advocated tapping community programmes to support patients in setting goals for their diabetes. There were subthemes that confirmed that follow-up/coordination was generally not received by patients. Referrals to community programmes to support patients in maintaining a healthy lifestyle after clinic visits might be suboptimal and under-utilised owing to unawareness of their existence, lack of accessibility, or inconvenience. However, some patients disconfirmed this key concept, having received care coordination for scheduling clinic appointments from the PCN care coordinators. Lastly, there were subthemes that were not integrated with the quantitative results but remained important to understanding patients' perspectives about PCN care. Patients' financial concerns about medication costs, and their requests for government subsidies, were key factors that could influence adherence and access. A local study showed that a new diagnosis of diabetes alongside common comorbid conditions such as hypertension and hyperlipidaemia incurred the greatest costs for patients, driven by new anti-diabetic medications and extra tests for monitoring and screening . Medication costs for diabetes patients constituted 43% of the total direct medical burden in the US in 2017, comprising $15 billion for insulin, $15.9 billion for other antidiabetic agents, and $71.2 billion in excess use of other prescription medications . Increasing patient cost-sharing for medications was significantly associated with a decrease in adherence, which in turn was associated with poorer health outcomes . In contrast, ensuring affordable medication costs could remove a barrier to accessing standard diabetes care .
The patients' observations that their GPs' clinics were too small to accommodate extra nurse services are consistent with studies showing how the design and layout of team spaces in primary care clinics affect how team members work together . However, a lack of physical space could be overcome with digital technologies and services such as telemedicine . Additionally, patients' calls for their GPs to use electronic medical records instead of paper records were supported by evidence showing that patients with chronic conditions benefitted most from electronic medical records that contained decision support tools for their physicians, communication tools that informed them of their treatment, and reporting and tracking capabilities that informed them of their progress . This study showed that the PCN types were not associated with the PACIC summary scores. Some aspects of quality, such as administrative and/or information technology support, might not be observed by patients. Additionally, the majority of the PCN clinics were single-handed practices and were likely to be similar in structure and operations despite joining different PCN types. Lastly, we found that younger people experienced more integrated care in our study, in contrast with other studies that did not find an association with age [ , , , , ]. Older people with chronic conditions have increased and often unmet health and social needs , requiring more medical care and support services . Therefore, older people in this study could have perceived that more could be done for them to feel adequately supported in diabetes care. The younger adults in our study could have perceived a higher quality of care from the PCNs, based on their more positive perceptions of themselves as young people .
As patients suggested, PCN GPs might consider collaborating with polyclinics to provide shared care and subsidised medications, using more community programmes, increasing clinic space for more nurse services including allied health, increasing the use of electronic medical records, and providing more self-care education. There was a wider concern about rising medical costs at the GP clinics and a call for more subsidies. All these issues merit consideration for improving the quality of care in PCNs in Singapore, while addressing barriers to the use of available services. Examination of the perspectives of PCN healthcare professionals is warranted to corroborate and expand the understanding of these findings and to optimise healthcare delivery in Singapore for people with T2D.
Patients with T2D in Singapore perceived that the PCNs provide integrated diabetes care that is consistent with the Chronic Care Model, particularly in the areas of patient activation, delivery system design/decision support, goal setting/tailoring, and problem-solving/contextual counselling. Follow-up/coordination will benefit from additional efforts.
Additional file 1. Interview guide.
Additional file 2. Definitions of PACIC Subscale Constructs.
Additional file 3. Sampling strategy for patients in quantitative study.
Additional file 4. Observed scores for Patient Assessment of Chronic Illness Care (PACIC) subscales and items.
Additional file 5. Correlation analysis with PACIC summary scores.
Additional file 6. Analysis of associations with PACIC summary scores.
Additional file 7. Representative quotes from patients about diabetes care in PCNs, organised into themes and subthemes.
Additional file 8. Patient joint comparison table showing integration analysis, quantitative results, and qualitative results.
Effects of reductive soil disinfestation combined with different types of organic materials on the microbial community and functions | 09483f32-85ed-4841-917e-b243feb00ab6 | 10846035 | Microbiology[mh] | Due to economic interests and land resource shortages, highly intensive agriculture has become prevalent in the contemporary world ( ). Unfortunately, continuous monoculture and over-fertilization frequently jeopardize highly intensive agricultural production systems, which inevitably lead to soil degradation and soilborne disease outbreaks ( ). Severe soilborne diseases have become a worldwide problem, causing considerable economic losses and sabotaging agricultural sustainability ( , ). According to statistics, more than 70% of medicinal plants have the above problems, such as Panax ginseng C. A. Mey. ( ), Salvia miltiorrhiza Bunge ( ), Panax notoginseng (Burk.) F. H. Chen ( ), which seriously limit the growth of medicinal plants and cause a lot of economic losses. Therefore, it is imperative to find an efficient method for maintaining the medicinal plants’ sustainable development under highly intensive production systems. Over the last century, soil microbiologists have developed a variety of methods to combat soilborne diseases ( ). A pre-planting soil management practice called reductive soil disinfestation (RSD) was independently developed in 2000 by scientists from Japan and the Netherlands ( , ). RSD creates anaerobic soil conditions by applying easily decomposed organic materials to the soil, adding water for irrigation, and covering plastic films, which soil microorganisms change from “aerobic” to “anaerobic,” and produce organic acids, hydrogen sulfide, ammonia, and other substances to inhibit the growth of soilborne pathogens ( , ). A number of soilborne plant pathogens have been demonstrated to be inactivated by RSD treatments, including Fusarium wilt ( ), verticillium wilt ( ), and damping-off disease ( ). Meanwhile, RSD practice can also restore soil degradation caused by acidification and secondary salinization ( ). Medicinal plants are prone to failure when repeatedly replanted in the same soil due to changes in the soil microbial community, accumulation of self-toxic substances, nutritional imbalances, and deterioration of physicochemical properties ( , ). It is thus possible to reduce medicinal plant replanting failure by implementing RSD practices. Different types of organic materials can have different effects on RSD practices ( ). Currently, organic materials are mainly divided into two categories: liquid and easily degradable materials (like molasses and ethanol) and solid agricultural wastes (like crop straw and livestock manure) ( , ). A study discovered that using wheat bran in RSD treatment could effectively inhibit spinach fusarium wilt, and the incidence could be reduced to 21.1% ( ). Additionally, molasses is often applied in combination with livestock and poultry manure as a carbon source for RSD practice to inhibit fungi and nematodes ( , ). This is because different types of organic materials can stimulate the reorganization of different microbiota through anaerobic degradation, especially the core microbiota, whose antagonism, competition, and parasitism can better inhibit pathogenic bacteria ( ). Meanwhile, soil microorganisms play a crucial role in soil ecosystems, serving a variety of functions including controlling diseases, retaining nutrients, decomposing organic matter, and forming humus ( , ). 
The effectiveness of RSD practices can be improved through the choice of organic materials. Nevertheless, it remains unclear how different types of organic materials affect the soil ecosystem services provided by the soil microbial community. It is therefore particularly important to explore the selection of organic materials for the RSD process and to assess the resulting changes in soil microbial communities, soil enzyme activities, and soil nutrients, because the choice of organic matter is crucial for disease control. Here, we hypothesized that (i) RSD incorporated with liquid easily degradable compounds and RSD incorporated with solid agricultural wastes differ in their effects on soil community composition, structure, and diversity; (ii) organic materials with different decomposition characteristics have different effects on soil functional components (nutrient cycling and disease inhibition); and (iii) the anaerobic environment may indirectly affect plant growth by changing the soil microecological environment. To test these hypotheses, we used dried perilla (PF) and alfalfa (MS) as representative solid agricultural wastes and acetic acid (AA) and ethanol (EA) as representative liquid easily degradable compounds. Pot experiments with five treatments were conducted in a ginseng monoculture soil seriously affected by soilborne diseases. We then determined how the RSD-related treatments affected soil microbial community diversity, soil enzyme activity, soil nutrient content, and the growth of replanted seedlings.
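As a simple worked example of the application rates detailed in the Methods below (substrate/soil ratios of 2%, 2%, 1%, and 0.25% wt/wt applied to 2 kg of soil per pot), the mass of each organic material added per pot works out as follows; this short Python snippet only illustrates the arithmetic and is not part of the original workflow.

# Mass of organic material added per pot, from the wt/wt application
# rates reported in the Methods (2 kg of soil per pot).
soil_per_pot_g = 2000            # 2 kg of soil

rates = {                        # substrate/soil ratio, wt/wt
    "perilla (PF)": 0.02,
    "alfalfa (MS)": 0.02,
    "ethanol (EA)": 0.01,
    "acetic acid (AA)": 0.0025,
}

for material, ratio in rates.items():
    print(f"{material}: {soil_per_pot_g * ratio:.0f} g per pot")
# Output: 40 g perilla, 40 g alfalfa, 20 g ethanol and 5 g acetic acid per pot.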
Soil sampling and experimental design The soil that was used in this study was collected from Baixi Forest Farm (44˚05′N, 127˚67′E) in Fusong County, Baishan City, Jilin Province, China. The sampling depth was 20–30 cm. The World Reference Base for Soil Resources classified this soil as alfisol ( ). The soil has been continuously planted with ginseng for the past 6 years, and soil samples have been collected after the harvest. The soil’s properties were as follows: pH 5.18; electrical conductivity (EC), 175.45 μS·cm −1 ; organic matter (OM), 105.89 g∙kg −1 ; available nitrogen (AN), 189.26 mg∙kg −1 ; available potassium (AK), 0.19 mg∙kg −1 ; and available phosphorus (AP), 9.02 mg∙kg −1 . Perilla ( Perilla frutescens Britt.) and alfalfa ( Medicago sativa L.) were collected from local forests, and ethanol and acetic acid were purchased from Beijing Chemical Co., Ltd (Beijing, China). These materials are easily and widely available, with perilla and alfalfa being less expensive than ethanol and acetic acid. Prior to starting the experiment, the perilla and alfalfa were crushed (particle size < 5 mm). Pots (15 × 15 × 12 cm, without drainage holes) were filled with 2 kg of soil, and treatments were conducted as follows: (i) CK, un-treated soil (without substrate), (ii) RSD, soil that had been incorporated with 2% perilla [substrate/soil ratio (wt/wt), the same below), (iii) 2% alfalfa, (iv) 1% ethanol, and (v) 0.25% acetic acid, and then irrigated to saturation and covered with white transparent plastic film (thickness 0.04 mm). There were three replications of these treatments, which were established randomly. The amount of organic material added refers to the amount of liquid and solid added in previous studies ( , , ). The plastic films were removed after 21 days of incubation, with the average day and night temperature in the soil at 30°C–40°C measured using a thermometer. After treatment, three soil cores were collected from each replicate and mixed as a biological replicate. The soil samples (five treatments × three replicates) were sieved and divided into two subsamples, one subsample was stored at −20°C for physicochemical, enzyme activities, and DNA analyses, and another subsample was air-dried for seed germination and seedling growth analysis. Three hundred seeds of ginseng were surface sterilized by soaking them in 0.3% carbendazim for 2 hours before sowing. Ginseng seeds were seeded in RSD-treated soil with 20 seeds per pot of soil, and each treatment was repeated three times. The pots were incubated in the incubator using a completely random grouping design. Based on the plants with new leaves after 15 days, the survival rate of ginseng seedlings was calculated. In order to eliminate the impact of seed differences, ginseng was then thinned to five plants per pot (leaving healthy ginseng seedlings of similar size) and harvested after 5 months. The roots were carefully washed under running water, and then deionized water was used to remove attached soil particles. After dividing the plants into root and shoot subsamples, the length and fresh weight of the ginseng root were measured. Soil physicochemical properties’ analysis Soil pH and EC were measured by a pH meter (PHSJ-3F, Shanghai, China) and conductivity meter (DDS-307A, Shanghai, China) at a soil-water ratio of 1:5 (wt/vol), respectively. Soil OM was measured by the potassium dichromate external heating method ( ). AP was extracted with NaHCO 3 solution, and then molybdenum-antimony colorimetry was performed ( ). 
AN was measured using the alkali-hydrolyzed diffusing method ( ). AK was extracted with 1 mol·L −1 NH 4 OAc and then determined by inductively coupled plasma optical emission spectrometer (ICO-OES, iCAP 7400 DUO, Thermofisher, USA) ( ). Soil enzyme activity analysis The activities of soil sucrase (SC), urease (UE), acid phosphatase (ACP), and catalase (CAT) were measured using a kit produced by the company Solarbio (Shanghai, China). SC was measured by the colorimetric method of 3, 5-dinitrosalicylic acid, and the activity was defined as 1 mg of reducing sugar produced per gram of soil per day at 37 °C. UE was measured by indophenol blue colorimetry, and the activity was defined as 1 µg NH 3 -N produced per gram of soil per day. ACP was measured by the disodium phenyl phosphate colorimetry, and the activity was defined as 1 nmol phenol release per gram of soil per day at 37°C as one enzyme activity. CAT was measured by colorimetric method, and the activity was defined as catalytic degradation of 1 µmol H 2 O 2 per gram of air-dried soil sample per day. Soil DNA extraction and PCR amplification Total DNA was extracted from 0.5 g of soil samples using an E.Z.N.A. Soil DNA Kit. We used a NanoDrop 2000 spectrophotometer after extracting DNA to determine its quality and concentration. Agarose gel electrophoresis was used to validate the DNA’s integrity. The 16S rRNA gene V4-V5 and ITS region were used to determine bacteria and fungi abundances, respectively. Polymerase chain reaction (PCR) was conducted using primers 338F/806R and ITS1F/ITS2R for amplification. The protocols described by Zhan et al. ( ) and Tan et al. ( ) were used for the amplification of bacterial 16S rRNA and fungal ITS genes and analysis of PCR product purity. Miseq sequencing and data processing The diversity and composition of the microbial community were measured using the Illumina Miseq PE300 platform (Illumina, USA) after purification. High-throughput sequencing results have been uploaded to NCBI (SRA bacterial accession number: PRJNA886401 ; fungal accession number: PRJNA886407 ). FLASH (version 1.2.11) was used to merge raw sequences generated by MiSeq paired-end sequencing. UPARSE (version 11) was used to cluster quality-filtered bacterial and fungal sequences into operational taxonomic units (OTUs) with 97% sequence similarity, respectively. Representative sequences were taxonomically classified using the Ribosomal Database Project and then according to the Silva database (16S rRNA, version 138) and Unite database (internal transcribed spacers (ITS), version 8.0), with a confidence threshold of 70% for both bacteria and fungi. Microbial functional predictions and data analysis Bacterial and fungal communities were predicted using the FAPROTAX and FUNGuild databases. To avoid overinterpreting fungal functional groups, only functional groups with “probable” and “highly probable” confidence levels were retained, and functional groups with “possible” confidence levels were deleted. Principal coordinate analysis was used to compare microbial communities and functional dissimilarities among treatments based on the Bray-Curtis distance. Linear discriminant analysis (LDA) effect size (LEfSe) was used to identify taxonomic microbial taxa among different treatments. The significance level of microbials at the taxonomic level was LDA > 4 and P < 0.05. Microbial networks were constructed using Cytoscape (Version 3.91) software and visualized using Gephi (Version 0.92). 
Correlation coefficients with |r| < 0.6 and P > 0.05 were removed from the correlation matrix. Heat map correlation analysis was used to visualize the relationships between dominant genera, microbial functions, and environmental factors. Experimental data were organized using Microsoft Excel. IBM SPSS 21.0 (SPSS Inc., USA) and R (Version 4.1.2) were used to perform all statistical analyses. Differences between the CK treatment and the RSD-related treatments were compared using Fisher's LSD post hoc test. GraphPad Prism (Version 8.01) and the Majorbio platform were used to create all graphics.
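The network-filtering step described above can be outlined in code. The sketch below assumes an OTU relative-abundance table with samples as rows and taxa as columns (the file name is a placeholder), computes pairwise Spearman correlations between taxa, retains only associations meeting |r| ≥ 0.6 and P < 0.05, and reports basic topological properties; it illustrates the filtering logic rather than reproducing the Cytoscape/Gephi workflow used in the study.

import numpy as np
import pandas as pd
import networkx as nx
from scipy.stats import spearmanr

# Samples x taxa relative abundances (illustrative file name).
otu = pd.read_csv("otu_table.csv", index_col=0)

# Pairwise Spearman correlation and P-value matrices between taxa (columns).
r, p = spearmanr(otu.values)

G = nx.Graph()
taxa = list(otu.columns)
G.add_nodes_from(taxa)
for i in range(len(taxa)):
    for j in range(i + 1, len(taxa)):
        # Keep only robust associations: |r| >= 0.6 and P < 0.05.
        if abs(r[i, j]) >= 0.6 and p[i, j] < 0.05:
            G.add_edge(taxa[i], taxa[j], weight=r[i, j])

print("nodes:", G.number_of_nodes(), "edges:", G.number_of_edges())

# Characteristic path length is defined on the largest connected component.
giant = G.subgraph(max(nx.connected_components(G), key=len))
if giant.number_of_nodes() > 1:
    print("characteristic path length:", nx.average_shortest_path_length(giant))
print("average clustering coefficient:", nx.average_clustering(G))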
Soil physicochemical properties After anaerobic treatment, soil pH was significantly ( P < 0.05) higher in all RSD-related treatments than in the CK treatment, with MS treatment having the highest soil pH ( ). Soil EC was significantly ( P < 0.05) lower in all RSD-related treatments than in the CK treatment, and the soil EC of PF treatment was the lowest, but there was no significant difference between EA and MS treatments ( ). Compared to the CK treatment, the soil OM content was significantly ( P < 0.05) increased in PF, MS, and EA treatments, whereas the soil OM content of AA treatment significantly decreased ( ). Soil AN content was significantly ( P < 0.05) higher in all RSD-related treatments than in the CK treatment, and the soil AN content of MS treatment was the highest, whereas there were no significant differences among the PF, EA, and AA treatments ( ). Soil AK content in MS and PF treatments was significantly ( P < 0.05) higher than that in the CK treatment, while it was increased slightly in the EA and AA treatments, with no significant differences compared to the CK treatment ( ). Compared to the CK treatment, the soil AP content was significantly ( P < 0.05) increased in the PF treatment, whereas the soil AP content of MS, EA, and AA treatments significantly decreased ( ). Soil enzyme activity After anaerobic treatment, soil UE activity in PF, MS, and EA treatments was significantly ( P < 0.05) higher than that in the CK treatment, while it was increased slightly in the AA treatment, with no significant differences compared to the CK treatment ( ). Soil SC activity was significantly ( P < 0.05) higher in RSD-related treatments than in the CK treatment, with the AA treatment having the highest soil SC activity ( ). Compared to the CK treatment, soil ACP activity was significantly ( P < 0.05) increased in MS, EA, and AA treatments, whereas soil ACP activity of PF treatment significantly decreased ( ). The soil CAT activity was significantly ( P < 0.05) higher in MS and AA treatments ( P < 0.05) than in CK treatment, while it was increased slightly in the PF and EA treatments, with no significant differences compared to the CK treatment ( ). Soil microbial community and functional diversities We generated 1,755,718 high-quality bacterial 16S rRNA gene sequences and 1,636,295 high-quality fungal ITS sequences from 15 soil samples in five different treatments using Miseq sequencing. The sequences were clustered into 16,422 and 3981 OTUs with 97% sequence similarity to bacteria and fungi, respectively. Overall, all RSD treatments had a significant ( P < 0.05) impact on both bacterial and fungal diversity indices ( ). Bacterial richness, diversity, and evenness index were higher in the PF treatment than in the other treatments, and the richness and diversity indexes in the PF treatment were significantly ( P < 0.05) higher than those in the CK treatment, but there were no significant differences in the evenness index ( ). In contrast, the fungal richness index in all RSD treatments was significantly ( P < 0.05) lower than that in the CK treatment, and the lowest richness index was found in the MS treatment, but the diversity and evenness indexes in the EA treatment were significantly ( P < 0.05) higher than those in CK treatment ( ). Similarly, RSD treatments significantly altered the microbial community and functions, according to principal coordinate analysis ( ). 
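The principal coordinate analysis mentioned above can be sketched as follows: Bray-Curtis dissimilarities are computed between samples, and classical PCoA is obtained by eigendecomposition of the double-centred squared distance matrix. The OTU-table file name is an illustrative placeholder, and this minimal NumPy/SciPy implementation stands in for whichever PCoA routine was actually used.

import numpy as np
import pandas as pd
from scipy.spatial.distance import pdist, squareform

# Samples x OTU counts (illustrative file name).
counts = pd.read_csv("otu_counts.csv", index_col=0)

# Bray-Curtis dissimilarity between every pair of samples.
D = squareform(pdist(counts.values, metric="braycurtis"))

# Classical PCoA: double-centre the squared distance matrix, eigendecompose,
# and scale the eigenvectors by the square roots of their eigenvalues.
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
eigvals, eigvecs = np.linalg.eigh(B)
order = np.argsort(eigvals)[::-1]                 # largest axes first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

keep = eigvals > 1e-12                            # retain positive axes only
coords = eigvecs[:, keep] * np.sqrt(eigvals[keep])
explained = eigvals[keep] / eigvals[keep].sum()

pcoa = pd.DataFrame(coords[:, :2], index=counts.index, columns=["PCo1", "PCo2"])
print(pcoa)
print("variance explained by PCo1 and PCo2:", explained[:2])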
The bacterial community structures in soil treated with PF, EA, and AA were grouped together and separated from soil treated with MS, but the fungal community structures in soil treated with EA and AA were grouped together and separated from soil treated with MS and PF ( ). The potential function of bacteria in soil treated with MS and AA was grouped together and separated from soil treated with PF and EA, but the potential function of fungi in soil treated with PF and MS was grouped together and separated from soil treated with EA and AA ( ). Soil microbial community compositions and potential function prediction RSD-related treatments significantly modulated the compositions of soil bacterial and fungal communities, with modulations varying between bacterial and fungal communities ( ). Actinobacteriota and Proteobacteria were the dominant bacterial phyla across all soil samples. In four treatments, more than 45% of the sequences classified belong to these phyla ( ). Ascomycota was the predominant fungal phylum in all the soil samples accounting for more than 75% of the total sequences across all the soils ( ). In general, the bacterial communities of the five soils were similar at the genus level, but some specific genera had different relative abundances ( ). For instance, the relative abundance of Arthrobacter , Terrabacter , and Gemmatimonas was significantly increased by RSD-related treatments. Obviously, the soil fungal community responded to RSD-related treatments more strongly than the bacterial community ( ). In RSD-treated soil, fungal communities differed both in composition and relative abundance of dominant fungal species from those in CK treatments. Specifically, the relative abundance of Fusarium decreased significantly in the RSD-related treatments and was only 1.33%–11.48% of that in the CK treatment. Naganishia and Blastococcus responded similarly to RSD-treated treatments as Fusarium . Meanwhile, RSD treatment had a more substantial effect on bacterial functional groups than on fungal functional groups. We found 40 bacterial functional groups and 3 fungal trophic modes (pathotrophs, symbiotrophs, and saprophytes). In RSD-treated soil, the relative abundances of bacterial functional groups related to carbon (C), nitrogen (N), sulfur (S), and hydrogen (H) cycling, such as ureolysis, aromatic compound degradation, nitrogen fixation, hydrocarbon degradation, nitrate reduction, nitrate respiration, dark oxidation of sulfur compounds, dark thiosulfate oxidation, nitrite respiration, and fungal functional groups associated with dung saprotrophy, endophyte, and undefined saprotroph were significantly enriched, compared to CK treatment ( ). Notably, the relative abundances of bacterial manganese oxidation, iron respiration, fungal plant pathogens, plant saprotroph, animal endosymbiont, and soil saprotroph were decreased in RSD-treated soil ( ). Microbial biomarker taxa and networks’ analysis LEfSe analysis revealed that RSD treatments significantly altered bacterial communities from the phyla to the genus level, and that each RSD treatment harbored distinct biomarkers ( ). 
For instance, the taxa Gammaproteobacteria, Burkholderiales, Xanthobacteraceae, Azospirillales, Azospirillum, Sordariomycetes, Coniochaetaceae, and Lasiosphaeriaceae were significantly enriched in PF soil; Proteobacteria, Alphaproteobacteria, Beijerinckiaceae, Caulobacteraceae, Corynebacteriales, Streptomyces, Phenylobacterium, Rhizobiaceae, Rhodococcus, Streptosporangiales, Chaetomiaceae, Sordariales, Ascomycota, Hypocreaceae, Microvirga, Nocardia, Trichoderma, Rhizobiales, and Neocosmospora were significantly enriched in MS soil; Sphingomonas, Intrasporangiaceae, Sinomonas, Parcubacteria, Oxalobacteraceae, Eurotiales, Phaffomycetaceae, Aspergillaceae, Penicillium, Talaromyces, Trichocomaceae, Pleosporales, Dothideomycetes, Leotiomycetes, Thielavia, and Didymella were significantly enriched in EA soil; whereas Micrococcaceae, Arthrobacter, Actinobacteria, Propionibacteriales, Xanthomonadaceae, Terrabacter, Geotrichum, Nocardioides, Nocardioidaceae, Saccharomycetes, Psathyrellaceae, Lysobacter, and Candida were considerably enriched in AA soil ( ). We found that there were noticeable differences between the CK and RSD-related treatments in the bacterial and fungal community networks and their topological characteristics ( ; ). In the correlation network study, bacteria were grouped into five clusters, while fungi were grouped into eight clusters, indicating stronger interactions within fungal communities than within bacterial communities ( ). Meanwhile, in RSD-related soil, the number of nodes and edges, network heterogeneity, and network centralization of bacterial networks were greater in PF soil, while the number of nodes and edges, characteristic path length, and network heterogeneity of fungal networks were lessened in all RSD-related treatments ( ). In the co-occurrence networks, nodes in the regions of connectors (0.24%), module hubs (0.65%), and network hubs (0%) were all identified as keystone species because they all had Zi > 2.5 or Pi > 0.62 ( ).
Relationships between soil properties, enzyme activity, functionality and microbiomes
The relative abundance of the most dominant genera in RSD-treated soil was significantly (P < 0.05) correlated with soil properties, enzyme activity, and functional groups ( ). For bacteria, the relative abundances of Devosia, Chthoniobacter, and Microvirga were significantly (P < 0.05) positively correlated with pH, AN content, CAT activity, aromatic compound degradation, aromatic hydrocarbon degradation, hydrocarbon degradation, ligninolysis, nitrate reduction, and dark oxidation of sulfur compounds, and negatively correlated with manganese oxidation, chloroplasts, and iron respiration ( ). Blastococcus and Sphingomonas relative abundances were significantly (P < 0.05) positively correlated with manganese oxidation and chloroplasts, and negatively correlated with pH, AN content, CAT activity, aromatic compound degradation, and nitrate reduction ( ). Sinomonas and Gemmatimonas relative abundances were significantly (P < 0.05) positively correlated with methanol oxidation, methylotrophy, and phototrophy but negatively correlated with nitrite respiration ( ). However, Flavisolibacter, Terrabacter, and Arthrobacter relative abundances were significantly (P < 0.05) positively correlated with SC activity, methanol oxidation, methylotrophy, phototrophy, cyanobacteria, oxygenic photoautotrophy, and photoautotrophy ( ).
For fungi, Saitozyma , Cladophialophora , and Coniochaeta relative abundances were significantly ( P < 0.05) positively correlated with AP content, animal pathogen, wood saprotroph, endophyte, and fungal parasite, and negatively correlated with AN content, ACP activity, and CAT activity ( ). Didymella , Talaromyces , and Penicillium relative abundances were significantly ( P < 0.05) positively correlated with animal pathogen, plant pathogen, and animal endosymbiont, and negatively correlated with pH, AK content, and CAT activity ( ). Pseudeurotium and Thielavia relative abundances were significantly ( P < 0.05) positively correlated with ACP activity and undefined saprotroph, and negatively correlated with AP content, fungal parasite, and orchid mycorrhiza ( ). Chaetomium and Neocosmospora relative abundances were significantly ( P < 0.05) positively correlated with CAT activity, and negatively correlated with dung saprotroph and animal pathogen ( ). Candida and Cyberlindnera relative abundances were significantly ( P < 0.05) positively correlated with pH, AN content, and ACP activity, and negatively correlated with AP content and wood saprotroph ( ). Additionally, Fusarium relative abundances were significantly ( P < 0.05) positively correlated with EC, plant pathogen, plant saprotroph, animal endosymbiont, and ericoid mycorrhiza, and negatively correlated with OM content, AN content, UE activity, and orchid mycorrhiza ( ). Seed germination and seedling growth RSD-related treatment significantly ( P < 0.05) promoted ginseng seed germination and seedling growth. The MS treatment had the highest seed germination rate, root weight, and root length, which were 1.64 times, 4.97 times, and 12.42 times that of the CK treatment, respectively ( ). At the same time, ginseng’s germination rate, root weight, and root length were MS > PF > EA >AA, which indicated that the combination of reductive soil disinfestation and solid agricultural waste had a stronger effect on ginseng seed germination and seedling growth than the combination of reductive soil disinfestation and liquid easily degradable compounds ( ).
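The keystone-species criterion used in the network analysis above (within-module connectivity Zi > 2.5 or among-module connectivity Pi > 0.62) can be reproduced for any co-occurrence graph once modules have been assigned. The sketch below uses networkx's greedy modularity communities for module detection and a small random graph for demonstration; both are assumptions made for illustration, since the study's exact module-detection settings are not specified here.

import networkx as nx
import numpy as np
from networkx.algorithms.community import greedy_modularity_communities

def zi_pi(G):
    """Within-module degree (Zi) and among-module connectivity (Pi) for each node."""
    modules = list(greedy_modularity_communities(G))   # assumed module detection
    node_module = {n: m for m, nodes in enumerate(modules) for n in nodes}

    zi = {}
    for m, nodes in enumerate(modules):
        # Number of links each member has to other nodes in the same module.
        k_in = {n: sum(1 for nb in G[n] if node_module[nb] == m) for n in nodes}
        mean, std = np.mean(list(k_in.values())), np.std(list(k_in.values()))
        for n in nodes:
            zi[n] = (k_in[n] - mean) / std if std > 0 else 0.0

    pi = {}
    for n in G.nodes:
        k = G.degree(n)
        shares = np.zeros(len(modules))
        for nb in G[n]:
            shares[node_module[nb]] += 1               # links falling in each module
        pi[n] = 1 - np.sum((shares / k) ** 2) if k > 0 else 0.0

    return zi, pi

# Demonstration on a small random graph; in practice G would be the
# co-occurrence network built from the filtered correlation matrix.
G = nx.erdos_renyi_graph(60, 0.1, seed=42)
G = G.subgraph(max(nx.connected_components(G), key=len)).copy()
zi, pi = zi_pi(G)
keystones = [n for n in G.nodes if zi[n] > 2.5 or pi[n] > 0.62]
print("keystone nodes (module hubs, connectors, network hubs):", keystones)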
RSD incorporated with solid agricultural waste can reorganize soil microbial communities and inhibit soilborne pathogens’ growth The soil microbial community is essential for nutrient cycling, plant growth, and disease prevention and control ( , ). Numerous studies have shown that RSD treatment significantly changes soil microbial communities and helps to resist pathogen invasion ( , ). In agreement with these studies, we found that RSD treatment significantly decreased the relative abundance of known soilborne pathogen Fusarium and increased the relative abundance of known disease-suppressive agents Arthrobacter , Streptomyces , Terrabacter , and Gemmatimonas . Members of the genera Gemmatimonas and Arthrobacter , for example, inhibit pathogenic microorganism growth by producing lipopeptides and bacteriocins ( , ). Meanwhile, Streptomyces has the ability to degrade autotoxic substances and has been found to successfully control Fusarium oxysporum and Botrytis cinerea -caused soilborne diseases ( ). Additionally, the type and amount of the organic materials have a greater effect on the generation of anaerobic conditions and the production of volatile organic compounds toxic to plant pathogens ( , ). For example, RSD incorporated with maize straw can effectively inhibit Artemisia selengensis root rot pathogens and the inhibition efficiency can reach 90% when the application rate is 2% ( ). Besides, previous studies have found that highly and moderately labile C sources increase the rate of attainment of anaerobicity, a condition that is critical for the success of RSD ( , ). In this study, perilla and alfalfa were mainly composed of lignin, cellulose, and hemicellulose, which are more difficult to decompose and utilize than ethanol and acetic acid. Therefore, the rest of the C sources would continuously stimulate the aerobic microorganisms and improve soil microbial activity after RSD treatment. We also observed that PF and MS treatments significantly increased bacterial diversity while decreasing fungal diversity because bacteria were dominated by anaerobic or facultative anaerobic taxa, whereas most fungi lived in aerobic environments ( ). Consequently, RSD treatment increased bacteria diversity and richness by occupying ecological niches and inhibiting fungal pathogen activity. Interestingly, EA and AA treatments decreased bacterial diversity but increased fungal diversity. Considering that ethanol and acetic acid are both pure chemicals, it is likely that the microbiome succession was primarily determined by the type of materials used under anaerobic conditions ( ). Although RSD treatment with different carbon sources alters microbial community richness, it also cultivates a unique core microbiota. For instance, the taxon Azospirillum was significantly enriched in PF soil, the taxon Chaetomiaceae was significantly enriched in MS soil, the taxon Penicillium was significantly enriched in EA soil, and the taxon Lysobacter was significantly enriched in AA soil. Studies have shown that such core microbiomes have a variety of potential beneficial effects in maintaining soil health and plant growth, such as antibiotic production, heterologous material degradation, heavy metal adsorption, and root colonization ( , , ). Chaetomium produces cellulase and chaetomin, which can inhibit pathogenic fungi growth and promote the activation of soil nutrients ( , ). Penicillium can degrade harmful substances in soil, and its secondary metabolites can significantly inhibit some pathogens ( ). 
The microbial networks of healthy soils are thought to be more complex than those of diseased soils, indicating that the microbial network’s characteristics are crucial for predicting plant health status ( , ). Bacteria were found to be grouped into five clusters in this study, whereas fungi were found to be grouped into eight clusters, indicating stronger interactions between fungal communities than bacterial communities. Interestingly, the number of nodes and edges, characteristic path length, and network heterogeneity of fungal networks were lessened in all RSD-related treatments. This is because the presence of antifungal compounds resulting from the decomposition of various organic substrates during RSD treatment may inhibit fungal taxa growth ( , ). RSD incorporated with solid agricultural waste combination can restore degraded soil and improve soil functions RSD had effects on soil properties, enzyme activities, and functions in addition to improving the microbial community and resisting soilborne pathogens invasion ( , ). Acidification and salinization are two major characteristics of soil degradation ( ). The EC value, which is a common indicator of soil salinity in the current study, was significantly reduced in RSD-related treatments, with the lowest value observed in the PF treatment, which is consistent with previous reports ( ). Similarly, pH values of acidified soils were generally increased after RSD treatment, which was consistent with our results, resulting from the consumption of H + through denitrification and other redox reactions under anaerobic conditions ( ). Additionally, decomposable carbon sources and irrigation affect enzyme activities and macronutrients, and their transformation varies among different studies. We found that the contents of OM, AN, and AK and the activities of UE, SC, and ACP were increased in PF and MS treatments, which may be directly due to the anaerobic degradation of perilla and alfalfa, or indirectly due to the enhancement of nutrient cycling. Soil functional diversity plays a crucial role in microbial behavior ( ). We found that RSD treatment affected bacterial functional groups more significantly than fungal functional groups. This may be because the bacterial microbiota is strongly stimulated during the decomposition of organic materials, which leads to an increase in functional diversity and is consistent with our observations that RSD-related treatments more strongly favored bacterial community diversity ( , ). RSD-related treatments also increased microbial functions related to nutrient cycling, endophytes, and dung saprotrophs while decreasing fungal plant pathogens. The previously reported increase in OM, AN, and AK contents, as well as UE, SC, and ACP activities during RSD treatment, may be attributable to these increased nutrient cycling functions, which include ureolysis, nitrogen fixation, nitrate reduction, nitrogen respiration, and aromatic compound degradation ( , ). Linking reassembled soil microbiomes to soil properties, enzyme activities, and microbial functions The soil’s abiotic environment is critical for reorganizing the microbial community, improving the soil environment, and promoting seedling growth ( ). Studies suggested that the primary variables influencing soil microbial communities are EC, pH, OM, UE, SC, and ACP ( , ). Soil EC, pH, OM, AN, AK, and AP contents and UE, SC, ACP, and CAT activities were found to be significantly related to microbial taxa in this study. 
Research has shown that a slight change in soil pH (1.5 units) can alter bacterial activity by 50% ( ). Soil pH was significantly positively correlated with Devosia, Chthoniobacter, Microvirga, Candida, and Cyberlindnera, and significantly negatively correlated with Blastococcus, Sphingomonas, Didymella, and Talaromyces. These findings demonstrate that pH is an important factor in microbial community transformation and is strongly related to microbial richness and diversity ( ). Fusarium wilt, caused by Fusarium oxysporum, is among the most dangerous soilborne diseases and can affect a wide range of plants ( ). According to numerous studies, RSD practices can effectively decrease soilborne pathogens ( , ), which is consistent with our finding that Fusarium abundance was significantly decreased in all RSD treatments. Similarly, there was a significant positive correlation between soil EC and Fusarium abundance, supporting these earlier findings. Interestingly, the decomposability of the added organic materials may jointly determine the degree to which microbial activity can be improved through community changes, as suggested by the significant negative correlation between soil OM and Fusarium ( ). Previous research found that RSD enhances soil nutrient cycling functions by regulating the relative abundance of specific taxa within core microbiomes ( ). The majority of the dominant genera that increased noticeably in RSD-treated soils, such as Terrabacter, Flavisolibacter, and Arthrobacter, were closely related to the aforementioned nutrient cycling functions and were positively correlated with methanol oxidation, methylotrophy, phototrophy, cyanobacteria, oxygenic photoautotrophy, and photoautotrophy. Flavisolibacter is an important iron reducer, converting Fe3+ to Fe2+ both directly and indirectly under anaerobic conditions, and Fe2+ accumulation significantly inhibits plant pathogen growth ( , ). Sphingomonas has good environmental adaptability and tolerance and can degrade polycyclic aromatic hydrocarbons and phenols ( ); it was positively correlated with manganese oxidation and chloroplasts but negatively correlated with aromatic compound degradation and nitrate reduction. In addition, Devosia, Chthoniobacter, and Microvirga have been identified as important decomposers of various C and H compounds, thereby stimulating the soil nutrient cycle. Maintaining soil health is recognized as a prerequisite for alleviating replant failure; in practice, the primary indicator for evaluating soil health is whether replanted seedlings survive ( ). Our results indicated that RSD-related treatments significantly promoted ginseng seed germination and seedling growth. Consistent with previous studies, the MS treatment had the highest seed germination rate, root weight, and root length, which were 1.64 times, 4.97 times, and 12.42 times higher than those of the CK treatment, respectively ( ). This was most likely due to RSD’s beneficial effects on soil chemical and microbial properties, which reversed some of the negative effects of the previous cropping system on plant growth. Additionally, the effect of RSD combined with solid agricultural waste on ginseng seed germination and seedling growth was stronger than that of RSD combined with liquid, easily degradable compounds.
Differences in the chemistry (e.g., degradability) and quantity (e.g., availability) of the carbon sources in the organic materials used could explain this disparity.
Conclusions
By reorganizing microbial communities and repairing the soil environment, reductive soil disinfestation combined with either liquid easily degradable compounds or solid agricultural waste can significantly reduce replant failure, and the effect of RSD combined with solid agricultural wastes is better than that of liquid easily degradable compounds. In particular, increased bacterial diversity and abundance of beneficial taxa, together with decreased fungal diversity and pathogenic taxa, rebalance the soil microbiome. Meanwhile, RSD treatment increased microbial functions related to the cycling of carbon, sulfur, nitrogen, and hydrogen while decreasing plant pathogen functions. The majority of the dominant genera that increased significantly in RSD-treated soils were closely related to the aforementioned nutrient cycling functions, including Gemmatimonas, Streptomyces, Arthrobacter, and Terrabacter, many of which are also known disease-suppressive agents. Furthermore, RSD-related treatments also changed soil properties and enzyme activities, in particular increasing soil pH, AN and AK contents, and S-SC and S-CAT activities, and decreasing soil EC values. Importantly, RSD-related treatments also significantly promoted seed germination and seedling growth. Thus, RSD using various organic residues may improve soil quality and microbial community structure, inhibit the proliferation of pathogens, and support the growth of replanted crops, making it a promising agricultural practice. The stronger effect of RSD combined with solid agricultural wastes, relative to liquid easily degradable compounds, may be related to the stability and longer duration of action of solid agricultural wastes.
Investigating the bacterial community of gray mangroves (
Mangrove forests, a unique intertidal ecosystem accounting for 60–70% of tropical and subtropical coastlines, are home to genetically diverse aquatic and terrestrial organisms. They are of great ecological importance as they protect coastlines, influence global climate patterns, and facilitate phytoremediation processes in coastal areas. Mangrove ecosystems are highly productive and biochemically dynamic, characterized by large reservoirs of organic matter, elevated salinity, rapid nutrient cycling, and regular tidal flooding. Because of the high salinity of soils in coastal areas, plants growing in these regions have salt tolerance strategies not found in most terrestrial plants; for example, Avicennia marina, a gray mangrove also known as the “pioneer of mangroves”, harbours finger-like respiratory roots (pneumatophores) and leaves with salt glands as strategies to cope with high salinity. Mangrove ecosystems are rich in microbial diversity, supporting the growth of various microbial communities such as bacteria, fungi, archaea, and protists, which play a crucial role in the cycling of essential nutrients like nitrogen, sulfur, and carbon, thereby maintaining soil chemistry. In these environments, bacteria and fungi dominate, accounting for 91% of the microbial diversity, while algae and protozoa represent only 7% and 2%, respectively. The microbial communities in soils under mangrove vegetation are vital for the ecosystem’s maintenance, productivity, and conservation, forming mutualistic relationships with mangrove plants: root-associated microbes produce phytohormones and provide protection against phytopathogens, while receiving carbon metabolites from the plants in return. Sulfate-reducing bacteria, methanogenic archaea, methylotrophic bacteria, nitrogen-fixing bacteria, and phosphate-solubilizing bacteria are the primary bacterial groups found in mangrove ecosystems. Microbial communities inhabiting mangrove ecosystems are also of great biotechnological significance, as they possess genes for producing beneficial antibiotics, enzymes, and proteins, and for salt tolerance. Despite their significant ecological importance and the biotechnological potential of the microbes they host, mangrove forests are globally threatened by deforestation, rising sea levels, and the release of contaminants such as untreated sewage in coastal regions. Therefore, understanding the microbial communities inhabiting mangrove ecosystems and their intricate interactions with mangroves is essential for conservation and restoration projects. Furthermore, exploring microbial diversity is crucial for understanding its role in maintaining mangrove ecosystems and the biotechnological potential of the mangrove microbiota. The coastal areas of Umluj and Al-Wajh, within the Tabuk region of Saudi Arabia, are located in the northwestern part of the country along the Red Sea and are covered by gray mangrove ( Avicennia marina ) vegetation. The mangrove community of the Red Sea is dominated by A. marina, with a limited presence of Rhizophora mucronata. Red Sea mangrove communities are often characterized by shorter, dwarfed vegetation.
Dwarfing of A. marina, in contrast to the mangroves reported in other regions, is due to the limited supply of nutrients (freshwater input), higher salinity, increased levels of contaminants, and elevated temperature. Reliance on microbial communities for nutrient acquisition therefore becomes crucial in these environments, highlighting the importance of understanding the distinct microbial populations that develop in response to these specific conditions. Although several studies in recent years have reported on the soil microbiomes of various mangrove ecosystems, including coastal areas of the Red Sea, the microbiome composition of the coastal areas of Umluj and Al-Wajh along the northeastern Red Sea coastline, covered with A. marina vegetation, has been overlooked. In this study, we aimed to decipher the taxonomic diversity of the bacterial communities found in the bulk soil and rhizosphere of A. marina native to the coastal areas of Umluj and Al-Wajh. Specifically, we sought to determine whether the type of vegetation plays a crucial role in recruiting the rhizospheric microbiome and to identify key differences in bacterial community composition between bulk and rhizosphere soil.
Study and sampling sites
Bulk soil and rhizosphere samples were randomly collected from two areas, Umluj and Al-Wajh, along the coast of the Red Sea in the Tabuk region, Saudi Arabia. These coastal areas are predominantly covered by gray mangrove ( Avicennia marina ) vegetation, which is dwarfed compared with the mangroves reported in other areas. Sampling of the bulk and mangrove rhizospheric soil was performed in December 2022. Three rhizosphere soil replicates ( A. marina ) and three bulk soil replicate samples were collected. For each rhizosphere replicate, three to five roots were collected by carefully uprooting A. marina plants, removing the soil loosely attached to the roots by shaking the plants, and then scraping off and collecting the soil adhering firmly to the roots with a sterile spatula, gathering approximately 10 g of soil (2 g from each root) at a depth of 20 cm. Three bulk soil samples were obtained from each site, approximately 7 m away from each mangrove tree, using a small trowel to dig soil from the top layer (0–15 cm). The collected soil samples were immediately stored in sterile plastic bags, placed in iceboxes, and brought back to the laboratory. The samples were stored at −80 °C until DNA extraction.
DNA extractions
Total DNA was extracted from the rhizospheric and bulk soil samples (1.0 g) using the DNeasy® PowerSoil® Pro Kit (Qiagen, San Diego, CA, USA) following the manufacturer’s protocol. The quality and quantity of the extracted DNA were determined using a NanoDrop ND-1000 spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA), and DNA with an A260/280 ratio of 1.8–2.0 was used for subsequent experiments. DNA integrity and size were checked by 1% (w/v) agarose gel electrophoresis. DNA samples were stored at −20 °C.
16S rRNA amplicon sequencing
The total soil DNA samples from the two locations were used for microbial diversity analysis based on 16S rRNA amplicon sequencing. PCR reactions and 16S rRNA libraries were prepared using fusion primers targeting the 16S rRNA V3–V4 region. All PCR products were purified with Agencourt AMPure XP beads, dissolved in elution buffer following the manufacturer’s protocol, and labelled to complete library construction. Library size and concentration were determined with an Agilent 2100 Bioanalyzer. Qualified libraries were sequenced (2 × 300 bp, paired-end) on the DNBSEQ-G400 platform (BGI Genomics, Shenzhen, China), a low-error-rate platform in which DNA fragments are amplified into DNA nanoballs (DNBs) that are then sequenced.
Taxonomical and functional analyses
Bioinformatic analyses were performed on the Fastq files generated by the sequencing runs. Raw data were filtered to obtain high-quality clean reads using iTools Fqtools fqcheck (v0.25), cutadapt (v2.6), and readfq (v1.0) as follows: (1) reads with an average Phred quality score lower than 20 over a 25 bp sliding window were truncated, and reads whose length after truncation was less than 75% of the original length were removed;
(2) reads contaminated by adapter sequences were removed (default parameters: at least 15 bases overlapping between read and adapter, with a maximum of three mismatches allowed); (3) reads containing ambiguous bases (N bases) were removed; and (4) low-complexity reads, defined as reads with 10 consecutive identical bases, were removed. Barcode sequences were removed from the pooled libraries, and clean reads were assigned to the corresponding samples based on barcode alignments with no mismatches allowed. Overlapping clean reads were merged into tags (minimum overlap length 15 bp; mismatch ratio of the overlapped region ≤0.1) using FLASH (Fast Length Adjustment of SHort reads, v1.2.11). Clean tags were clustered into operational taxonomic units (OTUs) at a 97% similarity threshold using the UPARSE algorithm (USEARCH v7.0.1090), and chimeric sequences were filtered out using UCHIME (v4.2.40) against the gold database (v20110519) for 16S rDNA. All tags were mapped to the OTU representative sequences using USEARCH GLOBAL to construct an OTU abundance table. OTU representative sequences were taxonomically classified with the RDP classifier (v2.2) at a sequence identity threshold of 0.6 against the Greengenes (default, v201305) and RDP (Release 11.5, 2016-09-30) databases. Species composition and abundance were determined by aligning sequences against these taxonomic databases, allowing calculation of the relative abundance of each taxon in the samples and providing insights into microbial community structure. Alpha diversity metrics (such as the Shannon and Simpson indices) were calculated to assess species diversity within each sample. Beta diversity was analyzed using principal component analysis (PCA) based on Bray-Curtis dissimilarity to evaluate differences in microbial community composition between samples, using the ade package in R (v3.1.1; ). Functional prediction was performed using PICRUSt2, which infers the functional potential of microbial communities based on their phylogenetic placement. Pathway abundances were predicted, and differentially abundant pathways between groups were identified using statistical methods. Spearman’s rank correlation coefficients were calculated for predicted functional pathways with a relative abundance greater than 0.5% and visualized as heat maps to identify important patterns and relationships among dominant functional pathways. Differential analysis was performed using the Wilcoxon rank-sum test to identify predicted functional pathways with significant differences in relative abundance between groups. p-values and false discovery rates (FDR) were calculated, with significance set at p < 0.05.
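To make the downstream statistical steps described above concrete, the following minimal R sketch reproduces the general workflow on a toy OTU table: alpha diversity indices, a Bray-Curtis ordination, and a Wilcoxon comparison between habitats. It relies on the vegan package; the file name, object names, and grouping vector are illustrative assumptions rather than the actual analysis scripts used in this study, and cmdscale (principal coordinates) is used here as one common way to ordinate Bray-Curtis dissimilarities.

```r
library(vegan)   # diversity indices, richness estimators, and dissimilarity measures

# Hypothetical OTU count table: rows = samples, columns = OTUs (file name illustrative)
otu <- read.table("otu_table.txt", header = TRUE, row.names = 1, sep = "\t")

# Hypothetical habitat labels, in the same order as the rows of the table
group <- factor(c("bulk", "bulk", "bulk", "rhizosphere", "rhizosphere", "rhizosphere"))

# Alpha diversity: Shannon and Simpson indices plus richness estimators (observed, Chao1, ACE)
shannon  <- diversity(otu, index = "shannon")
simpson  <- diversity(otu, index = "simpson")
richness <- estimateR(otu)

# Beta diversity: Bray-Curtis dissimilarity followed by a principal coordinates ordination
bray <- vegdist(otu, method = "bray")
ord  <- cmdscale(bray, k = 2)
plot(ord, col = as.integer(group), pch = 19,
     xlab = "PCo1", ylab = "PCo2", main = "Bray-Curtis ordination")

# Compare Shannon diversity between habitats with a Wilcoxon rank-sum test
wilcox.test(shannon ~ group)
```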
16S rRNA based amplicon sequence analysis
Bulk and rhizospheric soils under A. marina plants were sampled from two locations, Ras-Alshabaan-Umluj (U) and Almunibrah-Alwajh (W), along the Red Sea coast in the Tabuk region, Kingdom of Saudi Arabia. Metagenomic DNA was extracted from the bulk and rhizospheric soils and subjected to 16S rRNA amplicon sequencing. A total of 1,504,842 raw paired-end reads (2 × 300 bp) obtained from 11 DNA samples were analyzed using various bioinformatic tools. Quality-control trimming and filtering yielded 1,496,294 good-quality reads, averaging 136,026 sequences per sample, and the paired reads were merged to generate sequences averaging 415–420 bp in length . Merged sequences sharing ≥97% similarity were clustered into OTUs to quantify the abundance of bacteria at every taxonomic level (phylum to species) in each sample. A total of 6,876 OTUs were obtained from all samples, of which only OTUs with ≥1% abundance in at least one sample were included in the detailed analyses .
Bacterial diversity in the bulk and rhizosphere soils from Umluj and Al-Wajh
OTU-based analysis was performed to identify the numbers of distinct and shared OTUs among the different sites. The data showed that 678 OTUs were common to the bulk and rhizosphere soils from Umluj, while 795 OTUs were common to the bulk and rhizosphere soils from Al-Wajh , suggesting that the community structures of the bulk soil and rhizosphere samples were more similar at Al-Wajh than at Umluj. Comparing the rhizospheres of the two locations, 1,857 OTUs were common to the rhizospheres from Umluj and Al-Wajh. The A. marina rhizosphere harboured a higher number of unique OTUs at Al-Wajh (3,011) than at Umluj (1,324). Overall, the number of OTUs at both locations was higher in the rhizosphere samples than in the bulk soils of the respective locations. These variations in bacterial communities, in terms of the number of OTUs at different locations (Umluj and Al-Wajh) and habitats (bulk soil and rhizosphere), were verified using different diversity indices. Alpha diversity measured by the observed species, Chao, ACE, Shannon, and Simpson indices revealed that diversity differed significantly (p ≤ 0.05) between habitats (bulk soil and rhizosphere). The rhizosphere samples from both sites had overall higher bacterial diversity, richness, and evenness than the bulk soil samples . All diversity indices showed significantly higher diversity (p ≤ 0.05) in the rhizosphere of Al-Wajh than in the bulk soil of the same site. Similarly, at Umluj most indices showed higher diversity in the rhizosphere than in the bulk soil, although in this case most differences were non-significant (p > 0.05) except for the ACE index (p < 0.05) . Comparing the diversity indices of the rhizosphere samples from the two sites revealed that diversity was much higher in the rhizosphere samples from Al-Wajh than in those from Umluj . The similarity and/or dissimilarity of the microbial communities in the rhizosphere and bulk soil samples from both sites was also measured. Our data showed that the bulk soil and rhizosphere samples from both sites occupied distinct positions ( and ).
The microbial community in the bulk soil samples from Umluj was distinct from that in the rhizosphere samples of the same site , and this difference was even more pronounced in the samples from Al-Wajh . Similarly, comparing the microbial diversities of the A. marina rhizosphere samples from the two sites revealed dissimilarities, with the rhizosphere communities from Umluj and Al-Wajh placed far apart in the ordination .
Distribution of bacterial taxa in the bulk and rhizosphere soils from Umluj and Al-Wajh
Detailed sequence analysis, involving the assignment of OTUs to different taxa and the distribution of these taxa across samples, revealed that the most abundant phyla in both the bulk soil and rhizosphere samples from Umluj were Actinobacteria, Proteobacteria, Firmicutes, Bacteroidetes, and Acidobacteria, whereas the most abundant phyla in both the bulk soil and rhizosphere samples from Al-Wajh were Firmicutes, Acidobacteria, and Chloroflexi . Interestingly, the relative abundance of Actinobacteria was lower in the rhizosphere samples than in the respective bulk soil samples at both the Umluj and Al-Wajh sites, and the relative abundance of Firmicutes was lower in the rhizosphere samples from Al-Wajh than in the respective bulk soil samples . The relative abundances of Proteobacteria, Bacteroidetes, Acidobacteria, Fusobacteria, and Chloroflexi were higher in the rhizosphere samples from Umluj than in the respective bulk soil samples , while the relative abundances of Proteobacteria, Acidobacteria, Bacteroidetes, Nitrospirae, and Chloroflexi were higher in the rhizosphere samples from Al-Wajh than in the respective bulk soil samples . Analyses at the class level revealed that, among the top 20 classes present at abundances exceeding 1% in one or more samples, Actinobacteria, Alphaproteobacteria, Betaproteobacteria, Gammaproteobacteria, Deltaproteobacteria, Sphingobacteriia, Clostridia, Anaerolineae, Bacilli, and Acidobacteria contributed significantly to the overall bacterial diversity regardless of the sampling site . Interestingly, the relative abundances of Actinobacteria, Alphaproteobacteria, and Betaproteobacteria were lower in the rhizosphere samples from Umluj than in the respective bulk soil samples, while the relative abundances of Gammaproteobacteria, Deltaproteobacteria, Sphingobacteriia, Acidobacteria, Bacteroidia, and Nitrospira were higher in the rhizosphere samples from Umluj than in the bulk soil samples from the same site . However, Bacilli were dominant in both the bulk soil and rhizosphere samples from Umluj. Similarly, the relative abundances of Actinobacteria, Betaproteobacteria, Gammaproteobacteria, Bacilli, and Clostridia were higher in the bulk soil samples from Al-Wajh, while the relative abundances of Alphaproteobacteria, Deltaproteobacteria, Anaerolineae, and Nitrospira were higher in the rhizosphere samples from Al-Wajh than in the bulk soil samples from the same site . Comparing the rhizosphere samples of the same plant species growing at the two sites revealed that several bacterial classes had similar relative abundances at both sites. However, the relative abundances of Actinobacteria, Bacilli, Sphingobacteriia, Acidobacteria Gp26, and Bacteroidia were higher in the rhizosphere samples from Umluj than in those from Al-Wajh.
Conversely, the relative abundances of Deltaproteobacteria, Anaerolineae, and Nitrospira were higher in the rhizosphere samples from Al-Wajh than in those from Umluj . To better characterize the most abundant taxa of the bacterial communities found in the bulk and rhizospheric soils of A. marina from both sites, genus-level diversity analysis was performed and revealed that the composition of the bacterial communities varied considerably depending on the site and habitat. The bulk soil samples from Umluj were dominated by Arthrobacter, Blastococcus, and Pseudomonas, while the most abundant genera inhabiting the rhizospheric soil of the same site were Propionibacterium, Corynebacterium, Staphylococcus, and Acidobacteria Gp10 and Gp26 . Of all the dominant genera, only Pseudomonas and Pelagibius showed similar relative abundances in both habitats at the Umluj site. Similarly, the bulk soil samples from Al-Wajh were dominated by Pseudomonas, Arthrobacter, Anaerococcus, Stenotrophomonas, and Propionibacterium, while Geminicoccus and Thermodesulfovibrio were abundant in the rhizospheric samples. Geminicoccus was the only genus inhabiting both the rhizosphere and the bulk soil samples of Al-Wajh. In general, the relative abundance of Propionibacterium was higher in the rhizosphere samples of Umluj and in the bulk soil samples of Al-Wajh, respectively. Interestingly, the rhizospheric bacterial communities differed between the Umluj and Al-Wajh sites even though the vegetation type was the same. The relative abundances of Propionibacterium, Staphylococcus, Acidobacteria Gp10 and Gp26, Prevotella, Streptococcus, and Corynebacterium were higher in the rhizosphere samples of Umluj than in those of Al-Wajh. In contrast, Geminicoccus and Thermodesulfovibrio were the most abundant taxa in the rhizosphere samples from Al-Wajh.
Predicted functional diversity in the bulk and rhizosphere soils from Umluj and Al-Wajh
The functional profiles of the bacterial communities inhabiting the bulk soil and rhizosphere of A. marina from both sites were predicted using PICRUSt2 (Phylogenetic Investigation of Communities by Reconstruction of Unobserved States). Based on the prediction results, bacterial taxa and their functions can be linked to obtain a general distribution profile of community function. PICRUSt2 predicts the abundance of functions (including KEGG, COG, and MetaCyc metabolic pathways) of microbial communities from 16S rRNA sequencing profiles . Among the functional categories predicted using the MetaCyc database, amino acid biosynthesis and degradation, nucleoside and nucleotide biosynthesis and degradation, cofactor/prosthetic-group/electron-carrier/vitamin biosynthesis, carbohydrate biosynthesis and degradation, fatty acid and lipid biosynthesis, C1 compound utilization and assimilation, cell structure biosynthesis, the TCA cycle, fermentation, and secondary metabolite biosynthesis were abundant in both the bulk soil and rhizosphere samples from both sites . The relative abundance of microbial genes varied depending on the habitat. Differential functional abundance analyses revealed that only a few functions were enriched, with marginally significant differences (p < 0.1), in the rhizosphere compared with the bulk soil samples .
The functions that were more abundant in the bulk soils included chlorinated compound degradation, nucleoside and nucleotide degradation, carboxylate degradation, the Entner–Doudoroff pathway, aromatic compound degradation, cofactor/prosthetic-group/electron-carrier/vitamin biosynthesis, alcohol degradation, cell structure biosynthesis, and the TCA cycle. Interestingly, the functions related to C1 compound utilization and assimilation were more enriched in the rhizosphere samples than in the bulk soil.
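For readers who wish to reproduce this type of comparison, the short R sketch below illustrates how predicted pathway abundances could be filtered to those above 0.5% mean relative abundance, compared between habitats with Wilcoxon rank-sum tests and FDR correction, and summarized with Spearman correlations for a heat map. The input file name, the sample-naming convention used to separate rhizosphere from bulk samples, and the object names are illustrative assumptions, not the scripts used in this study.

```r
# Hypothetical PICRUSt2 pathway-by-sample abundance table (file name illustrative)
path <- as.matrix(read.table("pathway_abundance.tsv", header = TRUE,
                             row.names = 1, sep = "\t"))

# Convert to relative abundance per sample and keep pathways with mean relative abundance > 0.5%
rel <- sweep(path, 2, colSums(path), "/")
rel <- rel[rowMeans(rel) > 0.005, ]

# Hypothetical naming convention: sample names starting with "R" are rhizosphere, "B" are bulk
rhizo <- grepl("^R", colnames(rel))

# Wilcoxon rank-sum test per pathway, followed by FDR correction
pvals <- apply(rel, 1, function(x) wilcox.test(x[rhizo], x[!rhizo])$p.value)
fdr   <- p.adjust(pvals, method = "fdr")
sig   <- rownames(rel)[fdr < 0.05]   # pathways differing significantly between habitats

# Spearman correlations among the dominant pathways, visualized as a heat map
rho <- cor(t(rel), method = "spearman")
heatmap(rho, symm = TRUE)
```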
Exploring the microbial community composition associated with the roots of A. marina in coastal areas is crucial for understanding its ecological role in maintaining mangrove ecosystems and its potential to sustain plant growth. While several studies have reported the soil microbiomes of different mangrove ecosystems in recent years, the coastal areas of the Red Sea, especially the northeastern Red Sea coastline with A. marina vegetation, have largely been overlooked in this regard . We explored the bacterial community at the Umluj and Al-Wajh sites in the Tabuk region using 16S rRNA amplicon sequencing. The bacterial diversity of the bulk soil and the rhizosphere of A. marina from these sites was compared to identify key differences in community composition and to understand the role of A. marina in recruiting microbial communities to the rhizosphere. Microbial diversity analysis of the bulk and rhizospheric soil samples from Umluj revealed an abundance of the phyla Actinobacteria, Proteobacteria, Firmicutes, Bacteroidetes, and Acidobacteria. In contrast, Firmicutes, Acidobacteria, and Chloroflexi were the most abundant phyla in the bulk and rhizospheric soil samples from Al-Wajh, suggesting an impact of location on the overall composition of the bacterial communities. The impact of geographical location on bacterial composition can be associated with the environmental factors affecting soil composition. Various physicochemical factors of mangrove soil, such as pH, electrical conductivity, salinity, and nutrient content, support the growth of a wide range of bacterial communities depending on the ecological environment . Other environmental factors, such as urban development, pollution, and seasonal changes, have also been reported to play a significant role in shaping the bacterial composition of mangrove soil . Marine ecosystems and coastal waters may also influence the overall bacterial community composition of coastal areas. Actinobacteria, one of the most dominant phyla and classes at the explored sites, was more abundant in the bulk soil than in the rhizosphere soils at both sites. This suggests that it might not be the A. marina plant that contributes to the presence of Actinobacteria; instead, marine ecosystems and coastal waters may be their source. Actinobacteria are not only among the most abundant bacterial taxa in marine microbial communities but also highly diverse in various marine ecosystems . Among Proteobacteria, another of the most abundant phyla at the explored sites, the classes Alphaproteobacteria, Gammaproteobacteria, and Deltaproteobacteria were predominant. Alphaproteobacteria and Gammaproteobacteria were distributed without any clear preference for bulk or rhizospheric soils under A. marina vegetation. For example, Gammaproteobacteria was the most abundant class in the rhizosphere samples from Umluj, whereas at Al-Wajh the same class was abundant in the bulk soil samples and Alphaproteobacteria predominated in the rhizosphere samples. Previously, both Alphaproteobacteria and Gammaproteobacteria have been reported as predominant taxa in samples from Chinese and Brazilian mangrove ecosystems, with mean annual precipitation, total organic carbon, and total nitrogen identified as key factors influencing their abundance .
Deltaproteobacteria were abundant in the rhizosphere samples as compared to the respective bulk soils from both sites, suggesting that this class may be part of the core microbiome of mangrove ecosystems, as suggested in previous studies. The prevalence of Deltaproteobacteria in mangrove ecosystems can be attributed to its metabolic flexibility, which provides a competitive advantage for surviving in fluctuating and harsh environments. Additionally, higher levels of Deltaproteobacteria in the rhizosphere of A. marina might be associated with anaerobic conditions within mangrove sediments, supporting anaerobic microbial communities including sulfur-reducing bacteria. In addition to Deltaproteobacteria, the predominance of classes such as Sphingobacteriia, Anaerolineae, Acidobacteria, Bacteroidetes, Chloroflexi and Nitrospirae in the rhizosphere samples from both sites as compared to the respective bulk soil samples is consistent with the rhizosphere effect. The abundance of Bacteroidetes in the rhizospheric soil compared to bulk soil samples is in agreement with previously reported studies. They have been reported to be abundant in inter-tidal regions, particularly in hydrocarbon-contaminated regions, and are known for their potential to mineralize high-molecular-weight organic matter. The relative abundances of Chloroflexi and Nitrospirae were higher in the rhizospheric soil samples of Umluj and Al-Wajh, suggesting the potential for organic matter decomposition and nitrogen metabolism in that habitat. Our findings align with another study that used amplicon and metagenome sequencing to reveal distinct microbial communities in different root compartments (nonrhizosphere, rhizosphere, episphere, endosphere) of mangroves. That study found unique distribution patterns for bacterial and fungal communities due to niche differentiation and root exudation, highlighting the importance of soil-root interfaces in shaping microbial diversity and function in mangrove ecosystems. Root exudates, serving as a rich source of carbon for microbes, play a significant role in shaping the microbial communities around plant roots. Meanwhile, communities inhabiting the rhizosphere assist plants in obtaining essential nutrients such as phosphorus, potassium and nitrogen. In our study, the rhizospheric samples from Umluj were dominated by Propionibacterium, Corynebacterium, Staphylococcus and Acidobacteria (Gp10 and Gp26). The presence of Propionibacterium in the rhizosphere of A. marina is surprising, as its association with the roots of A. marina has not been reported until now. The abundance of Corynebacterium in the rhizospheric soil samples of Umluj, however, suggests an increased potential for nitrogen cycling in the rhizospheric soil in contrast to the bulk soil samples. Two different strains of Corynebacterium, 63K and 12A, associated with the roots of Avicennia germinans, have been reported to be involved in the process of denitrification. Another study reported the abundance of Corynebacterium sp. in mangrove sediments and its involvement in the decomposition of mangrove leaf litter. Apart from their degradation potential, they are known for pathogenicity in Rhizophora mangle, a red mangrove. Moreover, the relative abundance of Staphylococcus, a nitrogen-fixing bacterium, in the rhizospheric soil of Umluj is consistent with a previous study that reported the isolation of Staphylococcus from the rhizosphere of mangrove trees. They have been found to actively participate in the process of decomposition.
Apart from their role in decomposition, Staphylococcus xylosus, isolated from petroleum-contaminated soil, has been reported for its potential to produce biosurfactants. The higher abundance of Acidobacteria (Gp10 and Gp26) in the rhizospheric soil samples of Umluj is also in concordance with a recent report in which Acidobacteria were found to be abundant in the pneumatophore-associated soil microbial community of the A. marina mangrove. The rhizosphere samples from Al-Wajh, as compared to the bulk soil samples from the same site, were dominated by Thermodesulfovibrio, a genus of sulfur-reducing bacteria, which reveals the significant role of geochemical properties in shaping the microbial communities around the roots. Thermodesulfovibrio, a thermophilic sulfate-reducer, plays a significant role in biogeochemical processes, particularly the reduction of sulfates into sulfides. Thermodesulfovibrio yellowstonii, an anaerobic sulfate reducer, was one of the most abundant species in mangrove sediments of the Yunxiao Mangrove National Nature Reserve, China. Sulfur-reducing bacteria such as Desulfococcus and Desulfosarcina, along with their associated functional genes, have recently been reported in abundance in the rhizosphere of Kandelia obovata, a native mangrove plant of Southern China, probably due to the presence of oxygen and redox potential gradients facilitating sulfate reduction in the rhizosphere. The abundance of Geminicoccus in the rhizospheric soil samples of Al-Wajh is surprising, as there is no literature available on the association of Geminicoccus with A. marina roots. However, a recent study has reported the potential of Geminicoccus roseus to produce biosurfactants in oil-contaminated soils. Two different species of Geminicoccus, G. flavidas and G. harenae, inhabiting desert soil crusts, are known for their ability to promote plant growth via IAA production. There were certain bacterial genera that showed a higher relative abundance in bulk soil samples than in rhizosphere samples from Umluj. These included Pseudomonas and Arthrobacter, suggesting the potential of the microbiota for hydrocarbon degradation and the degradation of other toxic substances, respectively, in these soils. The abundance of Pseudomonas in soils from coastal areas is consistent with a previous study that reported the isolation of two different strains of Pseudomonas aeruginosa in samples from Great Nicobar. Various species of Pseudomonas are known for their ability to degrade hydrocarbons and polythene bags. Additionally, a study has reported the capacity of Arthrobacter sp. to degrade acetaminophen, an anti-inflammatory drug, in mangrove sediments. Apart from their role in bioremediation, Arthrobacter has also been reported to play an important role in the process of denitrification in coastal ecosystems. The relative abundance of Blastococcus in the bulk soil of Umluj is also in agreement with a study in which it was isolated from mangrove soil in the Leizhou Peninsula. Another species of Blastococcus, Blastococcus litoris, has been isolated from sea-tidal flat sediments. Blastococcus, an actinobacterium, is generally part of the soil microbiome and shows potential resilience to the presence of heavy metals. Similarly, certain bacterial genera showed a higher relative abundance in bulk soil samples than in rhizosphere samples from Al-Wajh.
In addition to Arthrobacter and Pseudomonas, the relative abundance of Stenotrophomonas and Anaerococcus in the bulk soil samples of Al-Wajh suggested potential bioremediation activity. Interestingly, a study reported the polycyclic aromatic hydrocarbon biodegradation potential of the genus Stenotrophomonas isolated from coastal sediments. In addition, another species of Stenotrophomonas, Stenotrophomonas rhizophila, isolated from the rhizosphere of Kandelia candel, has been shown to slow down the process of eutrophication (algicidal effect). The prevalence of bio-degraders in coastal ecosystems is due to the large number of contaminants released in these areas, including untreated sewage, accidental oil spillage and industrial effluents. The abundance of Anaerococcus and Propionibacterium in bulk soil samples was surprising, as these are commonly human-associated Gram-positive bacteria. These bacteria might have been introduced into mangrove ecosystems through various anthropogenic activities. They are usually categorized into two groups, cutaneous and classical Propionibacterium, suggesting a complex interplay between human activities and mangrove ecosystems. Human pathogenic microbes may find their way to such ecosystems through urban runoff and wastewater discharges. Although differences in the microbial communities were observed at the taxonomic level, largely similar patterns of predicted functional genes were noted for both sites based on the MetaCyc functional profiling. However, minor differences in the abundance of microbial genes were observed on the basis of habitat. The relative abundance of genes for C1 compound utilization and assimilation in the rhizosphere coincides with the presence of Methylobacterium and Methyloceanibacter in samples from Umluj and Al-Wajh, respectively. Anaerobic conditions in the mangrove sediments make them hotspots for C1 metabolism, as methanogenic activity results in methane emissions. Moreover, methanol is also released from roots and leaf litter. The presence of methane and methanol might promote the recruitment of methylotrophs and methanotrophs in oxic layers of mangrove sediments. The absence of oxygen and the abundance of organic matter create an ideal environment for various groups of microorganisms, including methanogens, methanotrophs, acetogens and sulfate reducers, which are capable of catabolizing toxic carbon compounds. Similarly, the genes involved in the degradation of chlorinated compounds were abundant in bulk soil, as mangrove soils are major natural sinks for anthropogenic and industrial pollutants with enhanced decomposition potential. These results are in line with the results of the alpha diversity analysis, which showed Pseudomonas as a predominant genus in the bulk soil samples of both sites; Pseudomonas has been reported to degrade several chlorinated compounds under anaerobic conditions. While functional potential prediction analyses provide valuable insights, they have limitations, including the percentage of OTUs/reads predicted, which can affect the accuracy of the functional annotations. Incorporating metagenomic approaches would provide a more comprehensive understanding of the functional roles of bacterial communities.
In addition to niche differentiation and root exudation underscoring the importance of soil-root interfaces in shaping microbial diversity in mangrove ecosystems, seasonal dynamics and ecological parameters such as urbanization and chemical pollution greatly influence changes in bacterial diversity and composition in these ecosystems. Umluj and Al-Wajh, both part of the Tabuk region, have unique geographical and environmental features. Umluj is known for its pristine beaches and coral reefs, contributing to its biodiversity; Al-Wajh, on the other hand, has a different set of environmental conditions due to variations in sediment composition and coastal processes, including higher anthropogenic influences such as fishing, agriculture and pollution. The unique OTUs and microbial community compositions observed at each site might have been influenced by these factors. Similarly, a reduced microbial diversity has been reported in anthropogenically affected mangroves when compared to pristine mangroves, suggesting an important role of environmental regulators in shaping the composition of microbial communities in mangrove ecosystems. Another factor that could have contributed to the bacterial distribution and functionality in mangrove soils is bioturbation. Burrowing crabs and other crustaceans, which are strongly associated with mangrove trees, counteract the reduced redox potential in mangrove sediments through their burrowing activities, ultimately altering microbial composition and function significantly in mangrove ecosystems. Increased bioturbation enhances the presence of bacteria that are involved in carbon, nitrogen and phosphate cycling in mangrove ecosystems of the Red Sea coast, thus ultimately improving plant growth, particularly under extreme summer conditions. Additionally, seasonal variation also plays an important role in altering the bacterial diversity of mangrove ecosystems, as shown by another study in which the mangrove bacterial community was dominated by Actinobacteria during the monsoon and shifted to Gammaproteobacteria in summer. Overall, variations in the bacterial community structure between the two mangrove ecosystems could be attributed to differences in carbon, nitrogen and sulfur contents, along with changes in pH and salinity.
Altogether, our study provides new insight into the differences in the composition of microbial communities in the rhizosphere of gray mangrove vegetation from Umluj and Al-Wajh, within the Tabuk region of Saudi Arabia. We found that the microbial communities inhabiting the rhizosphere samples were significantly different from those inhabiting the bulk soil, suggesting a possible role of A. marina in defining the soil microbial community. In addition, the structural composition of the bacterial community also depends on several other factors, such as geographical location and ecological parameters, as revealed by differences in bacterial communities between soils from Umluj and Al-Wajh despite similar vegetation. Mangrove microbiomes play a crucial role in nutrient cycling processes, which are frequently disrupted by anthropogenic activities. Further research into functional gene analysis and whole metagenome analyses of microbial communities in these mangrove ecosystems is essential for understanding the role of microbes in maintaining mangrove ecosystems and for exploring the biotechnological potential of mangrove microbiota. Additionally, studying the impact of chemical pollution on the identified microbial communities could provide valuable insights into the adaptability of mangrove ecosystems and contribute to the development of effective conservation strategies.
10.7717/peerj.18282/supp-1 Supplemental Information 1 Ras Alshabaan-Umluj (Umluj), and Almunibrah-Al-Wajh (Al-Wajh) Tabuk region, Saudi Arabia. Figure S1 photo credit: Hanaa Ghabban. Figure S2 photo credit: Doha A. Albalawi.
Haematology dimension reduction, a large scale application to regular care haematology data | 5cdb1b78-d660-4511-9e33-5be2d83a4063 | 11823074 | Internal Medicine[mh] | The diagnostic process in a routine healthcare setting increasingly produces data in high volume, dimensionality and in multiple modalities, both structured and unstructured. Examples of these diagnostic data are ‘omics’ data such as transcriptomics, proteomics and metabolomics as well as imaging data, yet routine haemocytometer data of a complete blood count (CBC) can also be considered high-dimensional data. Visualisation of the data in a comprehensive way can be a challenge due to the high dimensionality. More importantly, to help healthcare professionals interpret these data for the benefit of individual patients, integration of the different types of data into integrated diagnostics models is warranted. One of the modelling challenges in the development and deployment of these models is the combination of vast data volumes and their high dimensionality, which may lead to computational performance issues. There is thus a need to ensure feasibility of integrated diagnostics models. One of the ways to achieve this is by using a low-dimensional representation of these data rather than the full dataset. Such a representation can be generated using dimension reduction techniques. Dimension reduction has historically been performed by the use of principal component analysis (PCA). This linear transformation technique assumes normally distributed variables, and is primarily focused on establishing a dimension reduction that is preserving the global structure. Global structure preservation aims at preserving the global patterns in the data, such as obvious clusters that are present in the data, whereas the local structure preservation aims at preserving more intrinsic patterns in the data, i.e. preserving the neighbourhood for each point. Several more recent dimension reduction approaches aim to also preserve local structure. One way to do this is through (non-linear) manifold approximation, which is based on learning the underlying structure of the data, mostly based on nearest neighbours. Some examples for these type of methods are Uniform Manifold Approximation and Projection (UMAP) , Pairwise Controlled Manifold Approximation (PaCMAP) , and TriMap, a triplet-based approach . Applying these methods to high-dimensional biological data has been performed before, including flow cytometry workflows, transcriptomics data, RNA sequencing data, and protein structure analysis among others . However, to the best of our knowledge, comparative work on robustness of dimension reduction on large haematological data has not been performed before. A complete blood count (CBC) assessing red and white blood cells and platelets, is one of the most frequently performed diagnostic procedures. Haemocytometers, on the basis of flow-cytometry, use proprietary algorithms to combine cell characteristics such as size, granularity, lobularity and viability into clinically relevant parameters like hemoglobin levels or white blood cell differentiation patterns. However, next to these parameters currently reported to the clinic, each routine haematology measurement actually encompasses research-only values and raw cell characteristics of red and white blood cells and platelets that are currently not used in clinical care. 
In the University Medical Center Utrecht (UMCU), Utrecht, the Netherlands, the raw hematology data of over 3 million samples that were measured on Abbott CELL-DYN Sapphire hematology analyzers have been stored in the Utrecht Patient Oriented Database (UPOD) since 2005. The full content and extent of the database is described elsewhere. Previous UPOD research shows that there is biologically and clinically relevant information hidden in the unreported hematology measurements of these samples. Using dimension reduction methods to enable processing of raw CBC data and visualising it or combining it into integrated diagnostics models may therefore eventually improve clinical practice. Considering this vast amount of haematological data, and its high number of dimensions, we set out to find a robust approach to reducing the dimensions of the data, so that they can be better processed and better visualized. By investigating the performance of the dimension reduction methods, we aim to ensure their usability on routine haematological data to improve clinical care, for example in diagnostic pipelines. As a dimension reduction should be a good representation of the original data, we not only compared the preservation of global and local data structure by several current dimension reduction techniques (PCA, UMAP, TriMap, PaCMAP, and Gaussian Random Projection as negative control), but also assessed their ability to preserve the clinical, diagnostic and biological relevance of the data.

Descriptives

We extracted all available CBC measurements from the Abbott CELL-DYN Sapphire from 2005 to 2020 from the UPOD. We filtered out samples for which a negative age was reported. We then applied rigorous quality control based on metadata retrieved from the CELL-DYN Sapphire machines and on in-house knowledge gained from clinical chemists and data managers. Examples of such quality control included the handling of erroneous measurements, or measurements that were otherwise suspicious. As some of the CBC measurements are only available if the sample was measured in reticulocyte mode, we imputed these missing variables using the miceforest package in Python, based on the Multiple Imputation with Chained Equations (MICE) approach using gradient boosting. In our data, samples were measured in reticulocyte mode by default from 2013 onwards, providing the opportunity to impute missing data before 2013, since these data could be considered Missing At Random (MAR). Considering the possibility that extreme outliers would distort the overall quality of any dimension reduction model, we transformed white blood cell count parameters to log scale. Additionally, we decided to clip the bounds of each parameter to limit the effect of outliers, while preserving the clinical relevance of the samples. A list of the analysed variables that required clipping thresholds can be found in table S1. In addition, we applied z-score scaling to all variables, so that the mean is 0 and the standard deviation is 1.

Dimension reduction

Dimension reduction methods

One of the most frequently used dimension reduction models historically is PCA, which tries to capture the data in linear combinations, using vector decomposition. It creates perpendicular components, meaning that components are not correlated to each other, and using this principle, PCA can reduce the original data into a reduced space by explaining the variance in the original data.
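To make the preprocessing and PCA steps described above concrete, here is a minimal sketch. The column names, clipping bounds and number of components are hypothetical placeholders, and a random matrix stands in for the CELL-DYN Sapphire data; this is an illustration rather than the study's actual pipeline.

```python
# Minimal preprocessing + PCA sketch; all names and thresholds are illustrative.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)

# Stand-in for the raw CBC matrix (samples x haemocytometer parameters).
df = pd.DataFrame(
    rng.lognormal(mean=1.0, sigma=0.5, size=(10_000, 4)),
    columns=["wbc", "neutrophils", "hgb", "plt"],  # hypothetical parameter names
)

# Log-transform white blood cell count parameters to reduce the impact of outliers.
for col in ["wbc", "neutrophils"]:
    df[col] = np.log(df[col])

# Clip each parameter to (hypothetical) bounds, analogous to table S1.
clip_bounds = {"hgb": (0.5, 15.0), "plt": (1.0, 1500.0)}
for col, (lo, hi) in clip_bounds.items():
    df[col] = df[col].clip(lower=lo, upper=hi)

# z-score scaling: mean 0, standard deviation 1 per variable.
X = StandardScaler().fit_transform(df)

# PCA to a reduced space; the cumulative explained variance shows how much of
# the original variance the chosen number of components captures.
pca = PCA(n_components=3)
X_pca = pca.fit_transform(X)
print(np.cumsum(pca.explained_variance_ratio_))
```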
This method is very useful when working with collinear features, as these features will be captured in the same components, since they explain the same variance in the original data. For assessing the performance of a PCA, the cumulative explained variance is often used, and this will naturally increase when the number of components is increased. PCA assumes linear relationships between variables and normally distributed variables. Yet, as the original data might contain non-linear relationships, we decided to also use manifold dimension reduction techniques, which are based on the theory that any space can be reduced to lower dimensions based on the shape of the data. In order to achieve this, each data point should be placed in a similar neighbourhood compared to the original space. This makes sure that the local structure of the data is better preserved, i.e., that data that are similar in the original space are also similar in the reduced space. Examples of non-linear dimension reduction techniques include Uniform Manifold Approximation and Projection (UMAP), Triplets Manifold Approximation (TriMap), and Pairwise Controlled Manifold Approximation (PaCMAP). In addition to PCA, these methods were used in the current study to capture the large and complex CELL-DYN Sapphire dataset in lower dimension. Finally, we used Gaussian Random Projection (GRP) as a negative control. We will provide a brief overview of these techniques in this section. Although UMAP, PaCMAP and TriMap are initialised with PCA by default, the individual components of UMAP, TriMap and PaCMAP have no specific meaning, unlike PCA. For PCA, the additional explained variance diminishes when a higher number of components is used.

UMAP

UMAP estimates the shape of the data in the higher dimensionality using a weighted graph and then projects the graph onto the lower dimension for dimensionality reduction (see Fig. ). UMAP constructs a high-dimensional graph by extending branches from individual points with a radius r to connect the points to their neighbourhood in high dimension. These branches then become a graph of various shapes to be projected onto the lower dimension, irrespective of the distance between points. The k-nearest neighbours in r can be set, where a low k preserves the local structure, and a higher k preserves the global structure of the original data. Finally, the high-dimensional graph is projected onto a lower dimension using a force-directed graph approach, pulling together points that are close and pushing apart points that are further away. This is done based on the weighted connectivity, meaning that points are drawn towards groups of points with which they have multiple connections, rather than points/clusters with singular connections. Clusters are formed based on some threshold, which also depends on the number of nearest neighbours. Increasing the k-nearest neighbours will result in larger groups of interconnected points, at the cost of increased computational complexity.

TriMap

TriMap is another manifold approach, and is primarily built around triplet constraints. TriMap constructs triplets per point (i) and pairs it to n_inliers (j) according to the distance metric used. For each of these pairings, n_outliers are sampled (k), resulting in n_inliers * n_outliers triplets per point (i, j, k).
Additionally, n_random random triplets are constructed. TriMap then creates a low-dimensional representation of the data where the ordering of the distances of these triplets is preserved ($d(i,j) \leq d(i,k)$), by weighting the triplets according to the relative distance of j and k to i (Fig. ).

PaCMAP

Similarly to TriMap, PaCMAP samples both neighbours and non-neighbours (Near Pairs and Further Pairs, respectively) in order to establish a low-dimensional representation of the original data. Contrary to TriMap, it also focusses on Mid-Near Pairs. Near Pairs are the nearest neighbours based on a scaled distance metric. Mid-Near Pairs are established by sampling 6 points per observation and then selecting the second-closest point based on distance. The amount of Mid-Near Pairs is set by the MN_ratio. Finally, Further Pairs are non-neighbours, and the amount of pairs is set using the FP_ratio. After initializing with PCA, PaCMAP uses a weighted loss function to optimize the low-dimensional representation. The loss function is primarily driven by the Near Pairs and Mid-Near Pairs, but gradually becomes mostly influenced by the Near Pairs. This means that the loss is highly increased if close points in the original space are placed further away in the reduced space.

Gaussian random projection

Gaussian Random Projection (GRP) is a dimension reduction technique that is based on the Johnson-Lindenstrauss lemma, which states that any high-dimensional Euclidean space can be reduced onto a lower-dimensional Euclidean space with minimal distortion (at most $1+\epsilon$) of the pairwise distances, and on a result by Hecht-Nielsen, who showed that a random selection of vectors in a high-dimensional space can be considered an orthogonal projection. Gaussian Random Projection does this by projecting the original data on a randomly generated matrix with Gaussian distributions. However, the accuracy of the projection and the number of required components for dimension reduction are highly dependent on the number of samples and the permitted error ($\epsilon$), specifically $n\_components \geq 4 \ln(n\_samples) / (\epsilon^2/2 - \epsilon^3/3)$. This means that GRP can require more components than available dimensions when the number of dimensions is sufficiently low and the number of observations is high. To that end, we included GRP as a negative control for the dimension reduction quality metrics, because we would expect this method to perform worst when reducing the data to a low number of dimensions (≤ 10) because of this constraint, since our data consist of over 3 million samples.

Parameter tuning

We tuned the number of neighbours used for UMAP, TriMap and PaCMAP (n_neighbours). For UMAP and PaCMAP we were interested in the number of neighbours, but for TriMap we were interested in the number of outliers and inliers, since this is important for the construction of triplets in TriMap. Both PCA and GRP do not require any tuning of nearest neighbours, since they are not neighbour-based.
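To illustrate how these methods and their neighbour-related parameters can be instantiated, the sketch below sets up all five techniques on a random stand-in matrix. The parameter names follow the public APIs of scikit-learn, umap-learn, trimap and pacmap as we understand them and should be verified against the installed versions; the values shown are arbitrary examples, not the settings tuned in this study.

```python
# Illustrative instantiation of the five dimension reduction methods.
# Parameter values are arbitrary; they are not the settings used in the study.
import numpy as np
import umap
import trimap
import pacmap
from sklearn.decomposition import PCA
from sklearn.random_projection import GaussianRandomProjection

X = np.random.default_rng(0).normal(size=(5_000, 70))  # stand-in for scaled CBC data

reducers = {
    "PCA": PCA(n_components=5),
    "GRP": GaussianRandomProjection(n_components=5),  # negative control
    "UMAP": umap.UMAP(n_components=5, n_neighbors=50, metric="manhattan"),
    # TriMap: neighbour behaviour is governed by the numbers of inliers/outliers.
    "TriMap": trimap.TRIMAP(n_dims=5, n_inliers=12, n_outliers=4, distance="manhattan"),
    # PaCMAP: MN_ratio / FP_ratio control the mid-near and further pairs.
    "PaCMAP": pacmap.PaCMAP(n_components=5, n_neighbors=50,
                            MN_ratio=0.5, FP_ratio=2.0, distance="manhattan"),
}

embeddings = {name: model.fit_transform(X) for name, model in reducers.items()}
for name, emb in embeddings.items():
    print(name, emb.shape)
```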
Additionally, we also investigated the number of dimensions (n_components) that were generated by all the dimension reduction methods, as this might increase the amount of information stored in the dimension reduction. For example, in PCA, the amount of total variation explained increases when the number of components is increased. As computing numerous distinct dimension reductions and their performance is computationally expensive using a nearest-neighbours approach, we also investigated the number of samples we could use for dimension reduction purposes.

Distance metrics

One important step in the assessment of dimension reduction techniques is the distance metric with which we assess the distances between data points and with which we perform the dimension reduction for the manifold approaches. As mentioned above, the number of dimensions of the reduced data with Euclidean distance is dependent on the number of samples and the permitted distortion ($\epsilon$). For a dataset with roughly three million samples and roughly one hundred dimensions, this means that we are not able to project the data to a lower-dimensional Euclidean space while preserving the distortion $0<\epsilon<1$. This practically excludes using the Euclidean distance metric from the perspective of distance preservation, and a fractional distance metric is better suited for the description of distances in high dimensionality ($d>30$). We decided to pursue the Manhattan distance as the simplest expression of the fractional distance. The Manhattan distance is defined as: $\sum_{i=1}^{n}|a_i-b_i|$.

Dimension reduction quality metrics

Two main ways that are used for dimension reduction quality metrics are evaluating the global and the local structure. Local structure metrics evaluate neighbourhoods of points and how well these are preserved in the reduced data, while global structure metrics evaluate how well the reduced data preserve the relationships between groups of points. In this study, both global and local structure metrics were used to find a balanced representation of the CELL-DYN Sapphire data in lower dimension. The metrics are generally rank-based, since these are insensitive to scaling. One unifying framework for rank-based metrics is the co-ranking matrix (Q-matrix). The Q-matrix compares the pairwise ranks of the original data versus the reduced data, showing the preservation of local and global distances. Calculating a Q-matrix consists of two steps. Firstly, a ranking of distances between points in both the original and the reduced data is calculated. Thereafter, a single matrix is constructed combining both rankings, explaining rank preservation in the low-dimensional data. For local preservation measures, we further used the proportion of neighbouring points being preserved (the neighbourhood-kept-ratio), and the trustworthiness score. The neighbourhood-kept-ratio is computed using the k-nearest neighbours N(i) of each point i in high-dimensional space and the k-nearest neighbours N′(i) of each point i in low-dimensional space. Consequently, N(i) and N′(i) are compared to see the intersection between their neighbourhoods.
The degree of overlap is calculated and divided by k to obtain a ratio for each i. Subsequently, this ratio is averaged over the number of samples to get the average neighbourhood preservation. The trustworthiness score ranks neighbourhood points in accordance with how close they are to the observation i in low- and high-dimensional spaces. If the ranks of neighbourhood points are misaligned in the reduced space, the metric will penalise these shifts, resulting in a lower score. A version of the trustworthiness score was used in this study with the help of the Q-matrix framework. For global preservation measures, we used the random triplet score and the Spearman rank correlation. The random triplet score is calculated by retrieving sets of two points (j, k) at random per i in the original data to form triplets (i, j, k). After this, it finds the same set of triplets in the reduced space and calculates the distance from i to j ($d_{ij}$) and from i to k ($d_{ik}$) for both the original and the reduced data. It then orders $d_{ij}$ and $d_{ik}$ based on their distance in both datasets. The degree of order preservation indicates global structure preservation by the dimension reduction method. Five triplets per i were used in this study. Finally, pairwise distances can be measured using the Spearman rank correlation to assess distance preservation in the reduced data. Another strength of this method is that the distance correlation is easily visualized in a graph (e.g., Figures S1 and S2) to assess the correlation of distances between low- and high-dimensional spaces. To compare the different dimension reduction methods with regards to their quality metrics, we performed the quality assessments in 10-fold, and used a T-test for comparison.

Preservation of biological representation

Cluster preservation

Because the biological relevance and meaning of the data should be maintained in the dimension reduction, we assessed preservation of biological relevance from four different angles. As a first angle, we studied the preservation of clusters of similar patients in the reduced data. We analyzed both the raw and the reduced data using HDBSCAN and k-means clustering and retrieved information on the preservation of the clusterings after dimension reduction. For this analysis, we were interested in the number of clusters extracted and the Normalised Mutual Information (NMI) and Adjusted Rand Index (ARI) scores (higher is better). The NMI and ARI scores report the extent of cluster preservation in the reduced data, taking the clusters in the original data as ground truth. K-means clustering retrieves a predefined number of clusters (k) based on the Euclidean distance towards a cluster centre, and tries to minimize the sum of distances over these k clusters. In practice, this can result in clusters that are of equal size and density, but are unintuitive for interpretation. HDBSCAN assigns clusters based on the density of the data, and is therefore more suitable to retrieve clusters with varying densities. This increases the possibility of retrieving meaningful clusters.
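A compact sketch of two of the evaluations described above — the neighbourhood-kept-ratio and the NMI/ARI comparison of clusterings before and after dimension reduction — is given below. It relies on scikit-learn and the hdbscan package; the data, neighbourhood size and clustering settings are illustrative stand-ins rather than the study's configuration.

```python
# Sketch: neighbourhood-kept-ratio and cluster preservation (NMI/ARI).
# Data, neighbourhood size and clustering settings are illustrative only.
import numpy as np
import hdbscan
from sklearn.neighbors import NearestNeighbors
from sklearn.decomposition import PCA
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score

rng = np.random.default_rng(1)
X = rng.normal(size=(2_000, 70))               # stand-in for scaled CBC data
X_red = PCA(n_components=5).fit_transform(X)   # any reduced representation

def neighbourhood_kept_ratio(X_high, X_low, k=50, metric="manhattan"):
    """Average overlap of the k-nearest neighbourhoods before and after reduction."""
    nn_high = NearestNeighbors(n_neighbors=k + 1, metric=metric).fit(X_high)
    nn_low = NearestNeighbors(n_neighbors=k + 1, metric=metric).fit(X_low)
    # Drop the first neighbour (the point itself).
    idx_high = nn_high.kneighbors(X_high, return_distance=False)[:, 1:]
    idx_low = nn_low.kneighbors(X_low, return_distance=False)[:, 1:]
    overlaps = [len(set(h) & set(l)) / k for h, l in zip(idx_high, idx_low)]
    return float(np.mean(overlaps))

print("neighbourhood-kept-ratio:", neighbourhood_kept_ratio(X, X_red))

# Cluster preservation: cluster the original and the reduced data and compare,
# treating the clustering of the original data as the ground truth.
labels_high = hdbscan.HDBSCAN(min_cluster_size=25).fit_predict(X)
labels_low = hdbscan.HDBSCAN(min_cluster_size=25).fit_predict(X_red)
print("NMI:", normalized_mutual_info_score(labels_high, labels_low))
print("ARI:", adjusted_rand_score(labels_high, labels_low))
```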
For our analysis, we used a pipeline with a z-score scaler, a dimension reduction method, and a clustering model (HDBSCAN). As a default we used the Manhattan distance, with 50 neighbours and a random selection of 100,000 samples from the haematology set for dimension reduction in this analysis.

Diurnal patterns

The second, descriptive angle was to study diurnal patterns in the reduced dataset, as the size of the original dataset allowed us to investigate large patterns within the data. One of the broad epistemic features of, at least part of, the hematology parameters is the presence of a diurnal pattern. We expected that dimension reduction algorithms preserve such broad qualitative features. We assessed the diurnal patterns in the reduced data with the use of a cosine fit, as implemented in the CosinorPy library. We assessed the diurnal patterns with 100,000 random samples, and based the hour of day on the time of blood draw.

Age and sex

The third angle was to assess biological relevance by two classification tasks that should be identifiable in the data: firstly, sex prediction in samples of patients between the ages of 20 and 50, as in this age range a clear difference in hemoglobin between men and women exists. Secondly, prediction of samples of patients below 20 versus patients above 60 years old, as the haematological characteristics of young people are known to be distinct from those of older people. For this purpose, we used a Gradient Boosting (GB) model to capture any non-linear associations. To assess the performance of the resulting models, we decided to focus on the accuracy and the Matthews Correlation Coefficient (MCC). The accuracy is the number of correctly predicted positive and negative cases divided by the total number of positives and negatives, i.e. $\frac{TP+TN}{TP+FP+TN+FN}$, where TP, FP, TN and FN are true and false positives, and true and false negatives, respectively. The MCC, or the $\phi$ coefficient, is a measure of the quality of a binary classification model that takes into account true and false positives and negatives, i.e., it is a summary measure for the confusion matrix, comparable to the F1 metric. The MCC is calculated as follows: $\frac{TP \times TN - FP \times FN}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}}$. The data were analysed using 10-fold cross validation with an inner validation set (as a result of the folds) and a dedicated outer validation set. 170,000 random samples were used for training, and 30,000 random samples were used for the dedicated validation set. Sampling was performed for computational reasons, with regard to the dimensionality of the original data. We assessed the significance of performance change using a T-test.

Identification of leukemia-like patients

As a final angle to assess preservation of biological relevance, we investigated a specific population that is completely divergent from the general population in terms of CBC. To this end, we used samples from patients with chronic lymphocytic leukemia that were diagnosed based on CBC characteristics together with clinical experts, more specifically based on very high lymphocyte counts. If dimension reduction preserves biological relevance, these samples should be clearly distinguishable in the lower-dimensional representation. Failure to detect these patients would significantly impact the use of the dimension reduction methods in clinical practice.
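As an illustration of how such a classification task can be scored with accuracy and MCC on a reduced representation, a minimal sketch using xgboost and scikit-learn follows. The features and labels are synthetic stand-ins and the split is arbitrary, not the 170,000/30,000 sampling used in the study.

```python
# Sketch: binary classification (e.g. sex-like label) on a reduced representation,
# evaluated with accuracy and the Matthews Correlation Coefficient (MCC).
# Features and labels are synthetic stand-ins.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, matthews_corrcoef

rng = np.random.default_rng(2)
X_red = rng.normal(size=(20_000, 5))                                   # stand-in reduced data
y = (X_red[:, 0] + rng.normal(scale=0.5, size=20_000) > 0).astype(int)  # fake labels

X_train, X_val, y_train, y_val = train_test_split(
    X_red, y, test_size=0.15, stratify=y, random_state=0
)

model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_train, y_train)
y_pred = model.predict(X_val)

print("accuracy:", accuracy_score(y_val, y_pred))
print("MCC:", matthews_corrcoef(y_val, y_pred))
```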
To detect potentially significant differences between the populations, we used an unpaired T-test, and considered a p-value below 0.001 to be significant.

Software and hardware

All analyses were performed with the Python programming language (version 3.9). Imputation was performed using the miceforest package. Dimension reduction was done using the scikit-learn package for PCA and GRP. UMAP was performed using the umap-learn package, TriMap was performed using the trimap package, PaCMAP was performed using the pacmap package. Sex and age classification was performed using the xgboost package. All calculations were performed on CPU, namely the Xeon W-2125 at 4GHz and 8 logic cores with 64GB memory. The code for this project is available from https://github.com/UPOD-datascience/celldyn_embedder.
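The unpaired T-test mentioned above can be run directly with SciPy; the sketch below compares each embedding dimension between a synthetic leukemia-like subgroup and the remaining samples at the 0.001 threshold. The group definitions and data are illustrative only.

```python
# Sketch: unpaired T-test comparing embedding coordinates of a divergent
# (leukemia-like) subgroup against the rest; data and groups are synthetic.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(3)
emb_rest = rng.normal(loc=0.0, scale=1.0, size=(50_000, 3))   # general population
emb_cll = rng.normal(loc=2.5, scale=1.0, size=(300, 3))       # divergent subgroup

for dim in range(emb_rest.shape[1]):
    stat, p = ttest_ind(emb_cll[:, dim], emb_rest[:, dim], equal_var=False)
    print(f"dimension {dim}: t={stat:.2f}, p={p:.2e}, significant={p < 0.001}")
```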
In order to achieve this, each data point should be placed in a similar neighbourhood compared to the original space. This makes sure that local structure of the data is better preserved, i.e., that data that is similar in the original space is also similar in the reduced space. Examples of non-linear dimension reduction techniques include Uniform manifold approximation (UMAP), Triplets Manifold Approximation (TriMap), and Pairwise Controlled Manifold Approximation (PaCMAP). In addition to PCA, these methods were used in the current study to capture the large and complex CELL-DYN Sapphire dataset in lower dimension. Finally, we used Gaussian Random Projection (GRP) as a negative control. We will provide a brief overview of these techniques in this section. Although UMAP, PacMAP and TriMap are initialised with PCA by default, the individual components of UMAP, TriMap and PaCMAP have no specific meaning, unlike PCA. For PCA, the additional explained variance diminishes when a higher number of components are used. UMAP UMAP estimates the shape of the data in the higher dimensionality using a weighted graph and then projects the graph onto the lower dimension for dimensionality reduction (see Fig. ). UMAP constructs a high-dimensional graph by extending branches from individual points with a radius r to connect the points to their neighbourhood in high-dimension. These branches then become a graph of various shapes to be projected onto the lower dimension, irrespective of distance between points. The k -nearest neighbours in r can be set, where a low k preserves the local structure, and a higher k preserves the global structure of the original data. Finally, the high-dimensional graph is projected onto a lower dimension using a force-directed graph approach, pulling together points that are close and pushing apart points are further away. This is done based on the weighted connectivity, meaning that points are drawn towards groups of points with which it has multiple connections, rather than points/clusters with singular connections. Clusters are formed based on some threshold, which also depends on the number of nearest neighbours. Increasing the k -nearest neighbours will result in larger groups of interconnected points, at the cost of increased computational complexity. TriMap TriMap is another manifold approach, and is primarily built around triplets constraints . TriMap constructs triplets per point ( i ) and pairs this to [12pt]{minimal} $$n\_inliers$$ n _ i n l i e r s ( j ) according to the distance metric used. For each of these pairings, [12pt]{minimal} $$n\_outliers$$ n _ o u t l i e r s are sampled ( k ) resulting in [12pt]{minimal} $$n\_inliers*n\_outliers$$ n _ i n l i e r s ∗ n _ o u t l i e r s triplets per point ( i , j , k ). Additionally, [12pt]{minimal} $$n\_random$$ n _ r a n d o m triplets are constructed. TriMap then creates a low dimensional representation of the data where the ordering of the distances of these triplets is preserved ( [12pt]{minimal} $$d(i,j) d(i,k)$$ d ( i , j ) ≤ d ( i , k ) ), by weighting the triplets, according to the relative distance of j and k to i (Fig. ). PaCMAP Similarly to TriMap, PaCMAP samples both neighbours and non-neighbours ( Near Pairs and Further Pairs respectively) in order to establish a low-dimensional representation of the original data. Contrary to TriMap, it also focusses on Mid-Near Pairs . Near Pairs are the nearest neighbours based on a scaled distance metric. 
Mid-near pairs are established by sampling 6 points per observation and then selecting the second-closest point based on distance. The amount of Mid-Near Pairs is set by the [12pt]{minimal} $$MN\_ratio$$ M N _ r a t i o . Finally, Further Pairs are non-neighbours, and the amount of pairs is set using the [12pt]{minimal} $$FP\_ratio$$ F P _ r a t i o . After initializing with PCA, PaCMAP uses a weighted loss function to optimize the low dimensional representation. The loss function is primarily driven by the Near Pairs and Mid-Near Pairs , but gradually is mostly influenced by the Near Pairs . This means that the loss is highly increased if close points in original space are set further away in the reduced space. Gaussian random projection Gaussian Random Projection (GRP) is a dimension reduction technique that is based on the Johnson-Lindenstrauss lemma, which states that any high-dimensional Euclidean space can be reduced onto a lower-dimensional Euclidean space with minimal distortion (at most [12pt]{minimal} $$1+$$ 1 + ϵ ) of the pairwise distance , and a result by Hecht-Nielsen who showed that a random selection of vectors in a high-dimensional space can be considered an orthogonal projection. Gaussian Random Projection does this by projecting original data on a randomly generated matrix with Gaussian distributions. However, the accuracy of the projection and the amount of required components for dimension reduction is highly dependent on the amount of samples and the permitted error ( [12pt]{minimal} $$$$ ϵ ), specifically [12pt]{minimal} $$n\_components 4 ln(n\_samples) / ( ^2/2 - ^2/3)$$ n _ c o m p o n e n t s ≥ 4 l n ( n _ s a m p l e s ) / ( ϵ 2 / 2 - ϵ 2 / 3 ) . This means that GRP can require more components than available dimensions when the number of dimensions is sufficiently low and the number of observations is high. To that end, we included GRP as a negative control for the dimension reduction quality metrics, because we would expect that this method would perform worst when dimension reduction the data to a low number of dimensions ( [12pt]{minimal} $$$$ ≤ 10) because of this constraint, since our data consists of over 3 million samples. Parameter tuning We tuned the amount of neighbours used for UMAP, TriMap, PaCMAP ( [12pt]{minimal} $$n\_neighbours$$ n _ n e i g h b o u r s ). For UMAP and PaCMAP we were interested in the number of neighbours, but for TriMap we were interested in the number of outliers and inliers, since this is important for the construction of triplets in TriMap. Both PCA and GRP do not require any tuning on nearest neighbours, since they are not neighbour-based. Additionally, we also investigated the number of dimensions ( [12pt]{minimal} $$n\_components$$ n _ c o m p o n e n t s ) that were generated by all the dimension reduction methods, as this might increase the amount of information stored in the dimension reduction. For example, in PCA, the amount of total variation explained increases when the amount of components is increased. As computing numerous distinct dimension reductions and their performance is computationally expensive using a nearest-neighbours approach, we also investigated the number of samples we could use for dimension reduction purposes. Distance metrics One important step in the assessment of dimension reduction techniques is the distance metric with which we assess the distances between data points and with which we perform the dimension reduction for the manifold approaches. 
As mentioned above, the number of dimensions of the reduced data with Euclidean distance is dependent on the number of samples and the permitted distortion ( [12pt]{minimal} $$$$ ϵ ). For a dataset with roughly three million samples, and roughly one hundred dimensions, this means that we are not able to project the data to a lower-dimensional Euclidean space while preserving the distortion [12pt]{minimal} $$0< <1$$ 0 < ϵ < 1 . This practically excludes using the Euclidean distance metric from the perspective of distance preservation, and a fractional distance metric is best suited for the description of distances in high dimensionality ( [12pt]{minimal} $$d>30$$ d > 30 ) . We decided to pursue the Manhattan distance as the simplest expression of the fractional distance. The Manhattan distance is defined as: [12pt]{minimal} $$ _{i=1}^{n }|a_i-b_i|$$ ∑ i = 1 n | a i - b i | . Dimension reduction quality metrics Two main ways that are used for dimension reduction quality metrics are evaluating the global and local structure . Local structure metrics evaluate neighbourhoods of points and how well these are preserved in the reduced data, while global structure metrics evaluate how well the reduced data preserved the relationships between groups of points. In this study, both global and local distance metrics were used to find a balanced representation of the CELL-DYN Sapphire data in lower dimension. The metrics are generally rank-based, since these are insensitive to scaling. One unifying framework for rank-based metrics is the co-ranking matrix (Q-matrix) . The Q-matrix compares the pairwise ranks of the original data versus the reduced data, showing the preservation of local and global distances. Calculating a Q-matrix consists of two steps. Firstly, a ranking of distances between points in both original and reduced data is calculated. Thereafter, a single matrix is constructed combining both rankings, explaining rank preservation in the low-dimensional data. For local preservation measures, we further used the proportion of neighbouring points being preserved (the neighbourhood-kept-ratio), and the trustworthiness score. The neighbourhood-kept-ratio is computed using the number of nearest neighbours [12pt]{minimal} $$(i)$$ N ( i ) for all [12pt]{minimal} $$i$$ i in high-dimensional space and the [12pt]{minimal} $$k$$ k -nearest neighbours [12pt]{minimal} $$'(i)$$ N ′ ( i ) for all [12pt]{minimal} $$i$$ i in low-dimensional space, where [12pt]{minimal} $$i$$ i is each data point. Consequently, [12pt]{minimal} $$(i)$$ N ( i ) and [12pt]{minimal} $$'(i)$$ N ′ ( i ) are compared to see the intersection between their neighbourhoods. The degree of overlap is calculated, and divided by the number of [12pt]{minimal} $$k$$ k to calculate a ratio for each [12pt]{minimal} $$i$$ i . Subsequently, this ratio is divided by the number of samples to get the average neighbourhood preservation. The trustworthiness score ranks neighbourhood points in accordance with how close they are to the observations [12pt]{minimal} $$i$$ i in low- and high- dimensional spaces . If the ranks of neighbourhood points are misaligned in the reduced space, the metric will penalise these shifts, resulting in a lower score. A version of the trustworthiness score was used in this study with help of the Q-matrix framework . For global preservation measures, we used random triplet score and spearman rank correlation. 
The random triplet score is calculated by retrieving sets of two points [12pt]{minimal} $$(j,k)$$ ( j , k ) at random per [12pt]{minimal} $$i$$ i in the original data to form triplets [12pt]{minimal} $$(i,j,k)$$ ( i , j , k ) . After this, it finds the same set of triplets in the reduced space and calculates the distance from [12pt]{minimal} $$i$$ i to [12pt]{minimal} $$j$$ j ( [12pt]{minimal} $$d_{ij}$$ d ij ) and [12pt]{minimal} $$k$$ k ( [12pt]{minimal} $$d_{ik}$$ d ik ) for both the original and the reduced data. It then orders [12pt]{minimal} $$d_{ij}$$ d ij and [12pt]{minimal} $$d_{ik}$$ d ik based on their distance in both datasets. The degree of order preservation indicates global structure preservation by the dimension reduction method. Five triplets per [12pt]{minimal} $$i$$ i were used in this study. Finally, pairwise distances can be measured using the Spearman rank correlation to assess distance preservation in the reduced data. Another strength of this method is that distance correlation is easily visualized in a graph (e.g., Figure S1 and S2) to assess the correlation of distances between low- and high-dimensional spaces. To compare the different dimension reduction methods with regards to their quality metrics, we performed the quality assessments in 10-fold, and used a T-test for comparison. One of the most frequently used dimension reduction models historically is PCA, which tries to capture data in linear combinations, using vector decomposition. It creates perpendicular components, meaning that components are not correlated to each other, and using this principle, PCA can reduce the original data into a reduced space by explaining the variance in the original data. This method is very useful when working with collinear features, as these features will be captured in the same components, since they explain the same variance in the original data. For assessing the performance of a PCA, the cumulative explained variance is often used, and this will naturally increase when the number of components are increased. PCA assumes linear relationships between variables, and assumes normally distributed variables. Yet, as the probability exists that the original data might contain non-linear relationships, we decided to use manifold dimension reduction techniques, which are based on the theory that any space can be reduced to lower dimensions based on the shape of the data. In order to achieve this, each data point should be placed in a similar neighbourhood compared to the original space. This makes sure that local structure of the data is better preserved, i.e., that data that is similar in the original space is also similar in the reduced space. Examples of non-linear dimension reduction techniques include Uniform manifold approximation (UMAP), Triplets Manifold Approximation (TriMap), and Pairwise Controlled Manifold Approximation (PaCMAP). In addition to PCA, these methods were used in the current study to capture the large and complex CELL-DYN Sapphire dataset in lower dimension. Finally, we used Gaussian Random Projection (GRP) as a negative control. We will provide a brief overview of these techniques in this section. Although UMAP, PacMAP and TriMap are initialised with PCA by default, the individual components of UMAP, TriMap and PaCMAP have no specific meaning, unlike PCA. For PCA, the additional explained variance diminishes when a higher number of components are used. 
UMAP UMAP estimates the shape of the data in the higher dimensionality using a weighted graph and then projects the graph onto the lower dimension for dimensionality reduction (see Fig. ). UMAP constructs a high-dimensional graph by extending branches from individual points with a radius r to connect the points to their neighbourhood in high-dimension. These branches then become a graph of various shapes to be projected onto the lower dimension, irrespective of distance between points. The k -nearest neighbours in r can be set, where a low k preserves the local structure, and a higher k preserves the global structure of the original data. Finally, the high-dimensional graph is projected onto a lower dimension using a force-directed graph approach, pulling together points that are close and pushing apart points are further away. This is done based on the weighted connectivity, meaning that points are drawn towards groups of points with which it has multiple connections, rather than points/clusters with singular connections. Clusters are formed based on some threshold, which also depends on the number of nearest neighbours. Increasing the k -nearest neighbours will result in larger groups of interconnected points, at the cost of increased computational complexity. TriMap TriMap is another manifold approach, and is primarily built around triplets constraints . TriMap constructs triplets per point ( i ) and pairs this to [12pt]{minimal} $$n\_inliers$$ n _ i n l i e r s ( j ) according to the distance metric used. For each of these pairings, [12pt]{minimal} $$n\_outliers$$ n _ o u t l i e r s are sampled ( k ) resulting in [12pt]{minimal} $$n\_inliers*n\_outliers$$ n _ i n l i e r s ∗ n _ o u t l i e r s triplets per point ( i , j , k ). Additionally, [12pt]{minimal} $$n\_random$$ n _ r a n d o m triplets are constructed. TriMap then creates a low dimensional representation of the data where the ordering of the distances of these triplets is preserved ( [12pt]{minimal} $$d(i,j) d(i,k)$$ d ( i , j ) ≤ d ( i , k ) ), by weighting the triplets, according to the relative distance of j and k to i (Fig. ). PaCMAP Similarly to TriMap, PaCMAP samples both neighbours and non-neighbours ( Near Pairs and Further Pairs respectively) in order to establish a low-dimensional representation of the original data. Contrary to TriMap, it also focusses on Mid-Near Pairs . Near Pairs are the nearest neighbours based on a scaled distance metric. Mid-near pairs are established by sampling 6 points per observation and then selecting the second-closest point based on distance. The amount of Mid-Near Pairs is set by the [12pt]{minimal} $$MN\_ratio$$ M N _ r a t i o . Finally, Further Pairs are non-neighbours, and the amount of pairs is set using the [12pt]{minimal} $$FP\_ratio$$ F P _ r a t i o . After initializing with PCA, PaCMAP uses a weighted loss function to optimize the low dimensional representation. The loss function is primarily driven by the Near Pairs and Mid-Near Pairs , but gradually is mostly influenced by the Near Pairs . This means that the loss is highly increased if close points in original space are set further away in the reduced space. 
Gaussian random projection

Gaussian Random Projection (GRP) is a dimension reduction technique that is based on the Johnson-Lindenstrauss lemma, which states that any high-dimensional Euclidean space can be projected onto a lower-dimensional Euclidean space with minimal distortion (at most $1+\epsilon$) of the pairwise distances , and on a result by Hecht-Nielsen, who showed that a random selection of vectors in a high-dimensional space can be considered an orthogonal projection. Gaussian Random Projection does this by projecting the original data onto a randomly generated matrix with Gaussian distributions. However, the accuracy of the projection and the number of required components for dimension reduction are highly dependent on the number of samples and the permitted error ($\epsilon$), specifically $n\_components \ge 4\ln(n\_samples) / (\epsilon^2/2 - \epsilon^3/3)$. This means that GRP can require more components than available dimensions when the number of dimensions is sufficiently low and the number of observations is high. To that end, we included GRP as a negative control for the dimension reduction quality metrics, because, given this constraint and the fact that our data consists of over 3 million samples, we would expect this method to perform worst when reducing the data to a low number of dimensions ($\le 10$).
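The constraint above is easy to inspect with scikit-learn, which implements both the Johnson-Lindenstrauss bound and GRP itself; a small sketch with an illustrative eps (not a value taken from this study):

```python
import numpy as np
from sklearn.random_projection import (
    GaussianRandomProjection,
    johnson_lindenstrauss_min_dim,
)

# Minimum number of components needed to bound the pairwise-distance distortion
# by eps, following n_components >= 4 ln(n_samples) / (eps^2/2 - eps^3/3)
print(johnson_lindenstrauss_min_dim(3_000_000, eps=0.1))  # far exceeds the ~100 measured dimensions

# Forcing a low-dimensional projection anyway, as done for the negative control
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 70))   # stand-in for the haematology matrix
Z = GaussianRandomProjection(n_components=6, random_state=0).fit_transform(X)
```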
We tuned the number of neighbours used for UMAP, TriMap, and PaCMAP (n_neighbours). For UMAP and PaCMAP we were interested in the number of neighbours, but for TriMap we were interested in the number of outliers and inliers, since these are important for the construction of triplets in TriMap. Neither PCA nor GRP requires any tuning of nearest neighbours, since they are not neighbour-based. Additionally, we also investigated the number of dimensions (n_components) generated by all the dimension reduction methods, as this might increase the amount of information stored in the dimension reduction.
For example, in PCA, the total variation explained increases when the number of components is increased. As computing numerous distinct dimension reductions and their performance is computationally expensive using a nearest-neighbours approach, we also investigated the number of samples we could use for dimension reduction purposes.

One important step in the assessment of dimension reduction techniques is the distance metric with which we assess the distances between data points and with which we perform the dimension reduction for the manifold approaches. As mentioned above, the number of dimensions of the reduced data with Euclidean distance is dependent on the number of samples and the permitted distortion ($\epsilon$). For a dataset with roughly three million samples and roughly one hundred dimensions, this means that we are not able to project the data onto a lower-dimensional Euclidean space while keeping the distortion $0<\epsilon<1$. This practically excludes the Euclidean distance metric from the perspective of distance preservation, and a fractional distance metric is best suited for the description of distances in high dimensionality ($d>30$) . We decided to pursue the Manhattan distance as the simplest expression of the fractional distance. The Manhattan distance is defined as $\sum_{i=1}^{n}|a_i-b_i|$.

Two main ways to assess dimension reduction quality are evaluating the global and the local structure . Local structure metrics evaluate neighbourhoods of points and how well these are preserved in the reduced data, while global structure metrics evaluate how well the reduced data preserve the relationships between groups of points. In this study, both global and local distance metrics were used to find a balanced representation of the CELL-DYN Sapphire data in lower dimension. The metrics are generally rank-based, since these are insensitive to scaling. One unifying framework for rank-based metrics is the co-ranking matrix (Q-matrix) . The Q-matrix compares the pairwise ranks of the original data versus the reduced data, showing the preservation of local and global distances. Calculating a Q-matrix consists of two steps. First, a ranking of the distances between points in both the original and the reduced data is calculated. Thereafter, a single matrix is constructed combining both rankings, expressing rank preservation in the low-dimensional data. For local preservation measures, we further used the proportion of neighbouring points being preserved (the neighbourhood-kept-ratio) and the trustworthiness score. The neighbourhood-kept-ratio is computed using the $k$-nearest neighbours $N(i)$ for all $i$ in high-dimensional space and the $k$-nearest neighbours $N'(i)$ for all $i$ in low-dimensional space, where $i$ is each data point. Consequently, $N(i)$ and $N'(i)$ are compared to determine the intersection between the neighbourhoods. The degree of overlap is calculated and divided by $k$ to obtain a ratio for each $i$. Subsequently, these ratios are summed and divided by the number of samples to obtain the average neighbourhood preservation.
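A minimal sketch of the neighbourhood-kept-ratio with the Manhattan metric, combined with scikit-learn's built-in trustworthiness function, is shown below; this is our own illustration of the measures described here, not the exact code used in the study.

```python
import numpy as np
from sklearn.manifold import trustworthiness
from sklearn.neighbors import NearestNeighbors

def neighbourhood_kept_ratio(X, Z, k=50, metric="manhattan"):
    """Average overlap between the k-nearest neighbourhoods of each point
    in the original data X and in the reduced data Z."""
    idx_x = (
        NearestNeighbors(n_neighbors=k + 1, metric=metric)
        .fit(X)
        .kneighbors(X, return_distance=False)[:, 1:]   # drop the point itself
    )
    idx_z = (
        NearestNeighbors(n_neighbors=k + 1, metric=metric)
        .fit(Z)
        .kneighbors(Z, return_distance=False)[:, 1:]
    )
    overlap = [len(set(a) & set(b)) / k for a, b in zip(idx_x, idx_z)]
    return float(np.mean(overlap))

rng = np.random.default_rng(0)
X = rng.normal(size=(2_000, 70))   # stand-in for the original data
Z = X[:, :6]                       # placeholder "reduction"
print(neighbourhood_kept_ratio(X, Z, k=50))
print(trustworthiness(X, Z, n_neighbors=50, metric="manhattan"))
```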
The trustworthiness score ranks neighbourhood points according to how close they are to the observation $i$ in low- and high-dimensional space . If the ranks of neighbourhood points are misaligned in the reduced space, the metric penalises these shifts, resulting in a lower score. A version of the trustworthiness score was used in this study with the help of the Q-matrix framework . For global preservation measures, we used the random triplet score and the Spearman rank correlation, both described above.

Because biological relevance and meaning of the data should be maintained in the dimension reduction, we assessed preservation of biological relevance from four different angles.

Cluster preservation

As a first angle, we studied the preservation of clusters of similar patients in the reduced data. We analysed both the raw and the reduced data using HDBSCAN and k-means clustering and retrieved information on the preservation of clustering methods after using dimension reduction. For this analysis, we were interested in the number of clusters extracted and the Normalised Mutual Information (NMI) and Adjusted Rand Index (ARI) scores (higher is better). The NMI and ARI scores report the extent of cluster preservation in the reduced data, taking the clusters in the original data as ground truth. K-means clustering retrieves a predefined number of clusters (k) based on the Euclidean distance towards a cluster centre, and tries to minimize the sum of distances over these k clusters. In practice, this can result in clusters that are of equal size and density, but are unintuitive to interpret. HDBSCAN assigns clusters based on the density of the data, and is therefore more suitable to retrieve clusters with varying densities. This increases the possibility of retrieving meaningful clusters. For our analysis, we used a pipeline with a z-score scaler, a dimension reduction method, and a clustering model (HDBSCAN). As a default we used the Manhattan distance, with 50 neighbours and a random selection of 100,000 samples from the haematology set for dimension reduction in this analysis.
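A sketch of this cluster-preservation check, using a z-score scaler, a dimension reduction step, HDBSCAN and the NMI/ARI scores from scikit-learn, is given below; PCA stands in for any of the reducers, and the clustering settings are illustrative rather than the study's exact configuration.

```python
import hdbscan
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(5_000, 70))   # stand-in for the haematology matrix

X_scaled = StandardScaler().fit_transform(X)       # z-score scaling
Z = PCA(n_components=6).fit_transform(X_scaled)    # any reducer can be swapped in

labels_original = hdbscan.HDBSCAN(min_cluster_size=50).fit_predict(X_scaled)
labels_reduced = hdbscan.HDBSCAN(min_cluster_size=50).fit_predict(Z)

# Clusters found on the original data act as the ground truth
print("NMI:", normalized_mutual_info_score(labels_original, labels_reduced))
print("ARI:", adjusted_rand_score(labels_original, labels_reduced))
```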
Diurnal patterns

The second, descriptive angle was studying diurnal patterns in the reduced dataset, as the size of the original dataset allowed us to investigate large patterns within the data. One of the broad features of at least part of the haematology parameters is the presence of a diurnal pattern . We expected that dimension reduction algorithms preserve such broad qualitative features. We assessed the diurnal patterns in the reduced data with the use of a cosine fit, as implemented in the CosinorPy library . We assessed the diurnal patterns with 100,000 random samples, and based the hour of day on the time of blood draw.

Age and sex

The third angle was to assess biological relevance with two classification tasks that should be identifiable in the data: firstly, sex prediction in samples of patients between the ages of 20 and 50, as in this age range a clear difference in haemoglobin between men and women exists ; secondly, prediction of samples of patients below 20 versus patients above 60 years old, as the haematological characteristics of young people are known to be distinct from those of older people . For this purpose, we used a Gradient Boosting (GB) model to capture any non-linear associations. To assess the performance of the resulting models, we decided to focus on the accuracy and the Matthews Correlation Coefficient (MCC). The accuracy is the number of correctly predicted positive and negative cases divided by the total number of positives and negatives, i.e., $\frac{TP+TN}{TP+FP+TN+FN}$, where TP, FP, TN and FN are true and false positives, and true and false negatives, respectively. The MCC, or the $\phi$ coefficient, is a measure of the quality of a binary classification model that takes into account true and false positives and negatives, i.e., it is a summary measure for the confusion matrix, comparable to the F1 metric. The MCC is calculated as follows: $\frac{TP \times TN - FP \times FN}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}}$. The data were analysed using 10-fold cross validation with an inner validation set (as a result of the folds) and a dedicated outer validation set. 170,000 random samples were used for training and 30,000 random samples were used for the dedicated validation set. Sampling was performed for computational reasons, with regard to the dimensionality of the original data. We assessed the significance of the performance change using a T-test.

Identification of leukemia-like patients

As a final angle to assess the preservation of biological relevance, we investigated a specific population that is completely divergent from the general population in terms of CBC. To this end, we used samples from patients with chronic lymphocytic leukemia that were diagnosed based on CBC characteristics together with clinical experts, more specifically: based on very high lymphocyte counts. If dimension reduction preserves biological relevance, these samples should be clearly distinguishable in the lower-dimensional representation. Failure to detect these patients would significantly impact the use of the dimension reduction methods in clinical practice. To detect potentially significant differences between the populations, we used an unpaired T-test, and considered a p-value below 0.001 to be significant.
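The age and sex classification angle described above can be sketched as follows, using the xgboost package mentioned under Software together with scikit-learn's accuracy and MCC implementations; the simulated labels, model settings and single train/validation split are illustrative and do not reproduce the study's exact 10-fold setup.

```python
import numpy as np
from sklearn.metrics import accuracy_score, matthews_corrcoef
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
Z = rng.normal(size=(10_000, 6))          # stand-in for a 6-dimensional embedding
age = rng.integers(0, 100, size=10_000)   # stand-in for age at sampling

# Keep only the two age groups of interest (<= 20 versus >= 60)
mask = (age <= 20) | (age >= 60)
y = (age[mask] >= 60).astype(int)
Z_train, Z_val, y_train, y_val = train_test_split(Z[mask], y, test_size=0.15, random_state=0)

model = XGBClassifier(n_estimators=200, eval_metric="logloss").fit(Z_train, y_train)
pred = model.predict(Z_val)
print("accuracy:", accuracy_score(y_val, pred))   # (TP + TN) / (TP + FP + TN + FN)
print("MCC:", matthews_corrcoef(y_val, pred))
```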
All analyses were performed with the Python programming language (version 3.9). Imputation was performed using the miceforest package. Dimension reduction was done using the scikit-learn package for PCA and GRP. UMAP was performed using the umap-learn package, TriMap using the trimap package, and PaCMAP using the pacmap package. Sex and age classification was performed using the xgboost package. All calculations were performed on CPU, namely a Xeon W-2125 at 4 GHz with 8 logical cores and 64 GB memory. The code for this project is available from https://github.com/UPOD-datascience/celldyn_embedder.

Descriptives

In total, we extracted 3,093,792 samples from 358,614 unique patients. We used 70 different blood cell characteristics for this study, all of them continuous variables. We used no categorical variables in our embedding. The descriptives and missingness for each variable used in this study are described in Supplementary File 2. 52.8% of the samples were from male patients, and the median age at measurement was 51 (IQR: 27–66). After preprocessing the haematological data, we applied imputation to 1,107,049 samples for haematological parameters that were missing as a result of laboratory protocols (e.g., not using reticulocyte mode). The distribution of the samples per patient is shown in Fig. .

Dimension reductions

Parameter tuning results

Number of neighbours

First, we compared the number of nearest neighbours used for dimension reduction for UMAP, TriMap, and PaCMAP. The results are shown in Fig. for UMAP and in Figure S3 for PaCMAP. For an increasing sample size and number of neighbours, we observed an initial improvement with a rapid stagnation (Fig. , Figure S3). The neighbourhood kept ratio ranged from 0.29 at 5000 samples and 5 nearest neighbours to 0.33 at 160,000 samples and 100 nearest neighbours for UMAP. However, the scores for any number of neighbours from 15 and above were similar with increasing sample size. For trustworthiness, using 5 nearest neighbours yielded worse results than using 15 or more neighbours (ranging from 0.89 to 0.91). For all other numbers of neighbours, the trustworthiness was limited to 0.92.
For global distance preservation methods, UMAP was stable, with a random triplet score ranging from 0.72 for 5 nearest neighbours at 5000 samples to 0.74 for all numbers of neighbours at 40,000 to 160,000 samples. The distance correlation increased from 0.64 at 5000 samples to 0.69 at 40,000 to 160,000 samples (Fig. ). For PaCMAP, a similar pattern was observed (Figure S3). Local distance preservation as measured through the neighbourhood kept ratio ranged from 0.33 at 5000 samples and 5 nearest neighbours to 0.36 at 160,000 samples with 30, 50, or 100 nearest neighbours. Trustworthiness ranged from 0.88 at 5000 samples for 5 nearest neighbours to 0.90 at 160,000 samples for all other numbers of nearest neighbours. However, this performance was already reached at 10,000 samples by using 30, 50, or 100 nearest neighbours. Global distance preservation as measured by the random triplet score ranged from 0.73 to 0.74 for all numbers of neighbours. Distance correlation remained relatively stable, with scores ranging from 0.66 to 0.67. Considering these results, we decided to limit the sample size to 40,000 for TriMap to find the number of in- and outlying neighbours, because increasing the sample beyond this point yielded similar results, yet dramatically increased computational costs (data not shown). The results of this tuning are shown in Figure S4. We observed no large differences for the number of outliers used for TriMap. However, we did observe substantial increases in the global distance preservation metrics when increasing the number of inliers. The random triplet score increased from 0.75 (5 inliers) to 0.78 (100 inliers). Distance correlation increased from 0.75 (5 inliers) to 0.81 (100 inliers). However, we decided to move forward with 50 inliers and 15 outliers for TriMap, since increasing the number of neighbours was computationally not feasible for the entire data set of over 3 million samples (data not shown).

Number of components

As we used 50 nearest neighbours for the dimension reductions, we also used 50 nearest neighbours for calculating the dimension reduction quality metrics. We then increased the number of components for the final dimension reduction. We used 40,000 random samples, which were matched across the dimension reduction methods. We compared 2, 4, 6, 8, 10, 20 and 30 components to get a rough estimate of the increase in performance for each of these models. Figure shows the results for the neighbourhood kept ratio, trustworthiness, random triplet score and distance correlation. For all scores, PCA performed best across all numbers of components ($p<0.001$) (Fig. ). Additionally, the performances for UMAP, TriMap, and PaCMAP barely increased with the number of components. Focussing on local distances, the neighbourhood kept ratio increased from 0.27 (2 dimensions) to 0.89 (30 dimensions) for PCA, whereas it stagnated around 0.32 for UMAP, around 0.36 for TriMap, and around 0.35 for PaCMAP. GRP increased from 0.13 (2 dimensions) to 0.55 (30 dimensions). Trustworthiness was high for all dimension reduction methods, except for GRP at lower dimensions. PCA (range 0.92–0.97) had the highest scores; UMAP and PaCMAP performed similarly (ranges 0.90–0.92 for UMAP and 0.91–0.93 for PaCMAP). TriMap performed better than the other manifold approaches (range 0.92–0.94). GRP performed worse at lower dimensions (range 0.73–0.94).
When it comes to global distances, PCA outperformed all other dimension reduction methods on both the random triplet score (0.78 to 0.98) and the distance correlation (range 0.81–0.93, max = 0.93 at 8 dimensions). The random triplet score remained stable for the three manifold approaches, scoring 0.74 for UMAP, 0.78 for TriMap, and 0.73 for PaCMAP. GRP increased from 0.66 at 2 dimensions to 0.86 at 30 dimensions. Distance correlation for the manifold approaches increased primarily at lower dimensions, from 0.90 for UMAP, 0.91 for PaCMAP and 0.92 for TriMap up to 0.92, 0.93 and 0.94, respectively, at 4 components, and remained stable with increasing dimensions thereafter. Although we observed an increase of performance for PCA and GRP with increasing components, we also observed a stagnation for the manifold approaches at 4 components. Considering the increasing computational complexity of the manifold approaches with increasing components, we decided to limit the number of components to 6 for all methods, in order to reduce the entire data set of over 3 million samples.

Preservation of biological representation

Cluster preservation

Table shows the performances of the clustering methods using the reduced data. We observed an excess of clusters with subsequent low values for the Normalised Mutual Information (NMI) score and Adjusted Rand Index (ARI), showing that the dimension reduction methods have a tendency to generate an excess of clusters in comparison with the real data. We identified 12 clusters in the original data, whereas we found 32, 31 and 12 for PCA at 3, 6 and 12 components, respectively. For the manifold approaches, we found a large inflation of clusters. For UMAP we identified 115, 84, and 81 clusters; for TriMap we identified 45, 44 and 53 clusters; for PaCMAP we identified 42, 43 and 54 clusters, all with 3, 6 and 12 components, respectively. Finally, for GRP we identified 30, 22 and 5 clusters for 3, 6 and 12 components, respectively. Comparing the NMI score and ARI we found that, overall, scores were low ($\le 0.10$ for NMI and ARI) and did not improve when increasing the number of components, except for GRP, with an NMI of 0.01 at 3 components and 0.12 at 12 components, and an ARI increasing from −0.0003 at 3 components to 0.19 (Table ). Furthermore, we found that, in terms of cluster quality, UMAP stagnates at a value well under the optimum for an increasing number of components, being on par with PCA for smaller numbers of components (Figure S6). Additionally, we observed that all manifold approaches maintain a high level of cluster inflation for an increasing number of reduced dimensions. Finally, we observed that for a low number of reduced dimensions, all tested dimension reduction techniques produced a considerably inflated number of clusters as detected by HDBSCAN compared to the baseline cluster detection on the original data (Figure S7, Table ).

Diurnal patterns

Figure shows the 6 UMAP dimensions. We observed a diurnal pattern for each of the components, primarily split between daytime care (6:00–18:00) and care during the night, with a clear progression within the daytime period. The clearest diurnal patterns in the non-reduced data are obtained for the neutrophil and the eosinophil fractions. For these fractions, and for all components of the dimension reduction techniques, we observed significant results for the cosine fit (Table S2). The p-value represents the probability of the amplitude being zero . Additionally, Fig. shows the retention of periodicity in the dimension reductions compared to the periodicity of the neutrophil fraction. We chose the neutrophil fraction as this parameter has a clear diurnal evolution.
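The cosine fit underlying these diurnal assessments can be illustrated generically as follows; this sketch uses scipy.optimize.curve_fit on simulated data rather than the CosinorPy implementation used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def cosinor(t, mesor, amplitude, acrophase, period=24.0):
    """Single-component cosinor model for a 24-hour rhythm."""
    return mesor + amplitude * np.cos(2 * np.pi * t / period + acrophase)

rng = np.random.default_rng(0)
hour = rng.uniform(0, 24, size=5_000)     # hour of blood draw
component = 0.5 * np.cos(2 * np.pi * hour / 24) + rng.normal(scale=0.3, size=5_000)

params, _ = curve_fit(cosinor, hour, component, p0=[0.0, 1.0, 0.0])
mesor, amplitude, acrophase = params
print(f"fitted amplitude = {amplitude:.2f}")  # a non-zero amplitude indicates a diurnal pattern
```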
Prediction performance

To assess the preservation of biological relevance, we compared the age ($\le 20$ versus $\ge 60$) and sex prediction performance of the original data to that of the reduced data. Results of the age at sampling predictions can be found in Fig. . We used 170,000 random samples, matched to their reduced data, for training and 30,000 random samples for validation. We observed a significant ($p<0.001$) drop in performance when data from any dimension reduction method was used. We observed very stable performances across the 10-fold cross validation, resulting in small variation for the accuracy and MCC. While the original data showed higher performance (accuracy = 0.88, MCC = 0.74) for age classification, we observed a lower accuracy, ranging from 0.76 for GRP to 0.80 for the manifold methods (PCA = 0.79), and a lower MCC, ranging from 0.47 for GRP to 0.56 for TriMap (PCA = 0.55; UMAP = 0.55; PaCMAP = 0.56). This means that applying dimension reduction negatively impacted the classification tasks. The same pattern was observed for sex prediction (Figure S5). The original data showed an accuracy of 0.76 and an MCC of 0.51. For the data in reduced space, the accuracy ranged from 0.61 for GRP to 0.70 for UMAP and TriMap (PCA = 0.68; PaCMAP = 0.69). The MCC ranged from 0.18 for GRP to 0.39 for UMAP (PCA = 0.34; TriMap = 0.38; PaCMAP = 0.36).

Identification of leukemia-like patients

In the original data (Fig. ) we found significant differences between patients that were identified as having chronic lymphocytic leukemia (CLL) and our overall population for both white blood cell count and lymphocyte count. In total, we identified 3205 samples from patients with CLL, and compared these samples to all other samples in the data (n = 3,090,580). For all dimension reductions, we found similar results, where the CLL patients' data had significantly different distributions ($p<0.001$) compared to the general population for a large portion of the dimensions (Figures S8 to S12).
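As an illustration of how such a per-dimension comparison can be computed, a minimal sketch with SciPy's unpaired t-test is given below; the CLL flag and the embedding are random stand-ins, whereas the study's actual CLL samples were selected on CBC characteristics together with clinical experts.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
Z = rng.normal(size=(100_000, 6))        # stand-in for the reduced data
is_cll = rng.random(100_000) < 0.001     # stand-in for the CLL flag

for dim in range(Z.shape[1]):
    t, p = ttest_ind(Z[is_cll, dim], Z[~is_cll, dim])
    print(f"component {dim}: t = {t:.2f}, p = {p:.2e}")  # significant if p < 0.001
```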
In this study, we investigated the use of dimension reduction methods in a large set of routine CBC data from the Abbott CELL-DYN Sapphire haemocytometer. We compared PCA, UMAP, TriMap, and PaCMAP with multiple performance metrics (neighbourhood kept ratio, trustworthiness, random triplet score, and distance correlation). We found that, based on these dimension reduction metrics, PCA performed best in comparison with UMAP, TriMap and PaCMAP. As the purpose of these dimension reductions lies in analysis and interpretation, we investigated whether the biological representation was correctly maintained. We found that diurnal patterns were maintained, but that predictive tasks (such as age and sex classification) performed significantly worse compared to the original data, and that clustering tasks resulted in an overestimation of clusters compared to the original data. We conclude that using dimension reductions will result in a loss of information compared to the original data, even in predictive tasks where subgroups should be clearly apparent. In the literature, UMAP and other (non-linear) dimensionality reduction techniques are evaluated as superior with respect to PCA. However, the utility of UMAP and other nearest-neighbours-based dimension reduction methods is seemingly limited to very low-dimensional representations for the purpose of visualisation. In our study, we observe that for increasing dimensionality, the manifold techniques converge to dimension reduction scores that are far from optimal, whereas PCA reaches near-optimal scores well before it is able to explain 95% of the variance (n components = 30). This is likely the case for other global methods, but further research is needed to study this. We deem this effect to be partly due to the large sample size: the neighbourhood for a given sample becomes harder to define, or a larger number of neighbours is needed as the sample size increases. However, increasing the number of neighbours can result in computational issues, considering the pairwise nature of the dimension reduction techniques and performance measures. When dealing with neighbourhood-based dimension reduction methods such as UMAP, TriMap and PaCMAP, there is a trade-off between the preservation of local and global characteristics. Possible mitigations are to increase the number of components and the number of nearest neighbours. The number of dimensions and neighbours is dependent on the number of samples in the dataset. However, increasing the number of dimensions and number of neighbours increases the complexity of the dimension reduction. Furthermore, for an increasing number of samples of multiple modalities, the heterogeneity of the data can increase, and it then becomes more difficult to embed the data with sufficient accuracy; that is, more samples do not inherently equate to a better dimension reduction.
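One of the embedding-quality scores named above, trustworthiness, is available in scikit-learn and can be used to compare a PCA embedding with a UMAP embedding at a chosen neighbourhood size. The sketch below assumes the scikit-learn and umap-learn packages; the parameter values are illustrative, and for millions of samples such pairwise metrics are typically computed on a random subsample.

```python
# Neighbourhood-preservation check for two reductions (illustrative sketch).
from sklearn.decomposition import PCA
from sklearn.manifold import trustworthiness
import umap

def trustworthiness_scores(X, n_components=6, n_neighbors=30):
    """Return the trustworthiness of PCA and UMAP embeddings of X."""
    X_pca = PCA(n_components=n_components).fit_transform(X)
    X_umap = umap.UMAP(n_components=n_components,
                       n_neighbors=n_neighbors).fit_transform(X)
    # Trustworthiness is pairwise, so subsample X first for very large datasets.
    return {
        "pca": trustworthiness(X, X_pca, n_neighbors=n_neighbors),
        "umap": trustworthiness(X, X_umap, n_neighbors=n_neighbors),
    }
```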
Finally, PCA becomes competitive in terms of dimension reduction performance when increasing the number of dimensions, since the amount of explained variance increases, while being orders of magnitude more efficient computationally, especially if one considers the availability of Incremental PCA, which has constant memory complexity. Another PCA-related approach would be the application of Kernel PCA for non-linear PCA. However, Kernel PCA has notable scaling issues with sample size, and is therefore not useful in our setting. In addition, the use of Independent Principal Component Analysis could be of interest, since it makes no assumption concerning Gaussian distributions of input variables. Additionally, through the combination of Independent and Principal Component Analysis, more biologically meaningful components may be identified. Another way to mitigate the trade-off between global and local characteristics is to limit the number of samples used for the dimension reduction such that it contains enough samples per stratification, but not more. This requires enough information for the stratifications we are interested in, which in turn requires labelling. This is a known issue when using (routine) healthcare data, as the administrative start of a disease (indicated by the registration of a certain diagnosis) does not coincide with the physical start of the disease. As the physical start of the disease may affect some or all parts of the CBC, labelling of disease presence at the time of blood draw is intrinsically difficult. Moreover, most patients that visit our tertiary care centre suffer from complex diseases and multiple comorbidities, further complicating labelling of our haematology data. Because of these issues, we were unable to retrieve clear labels for our samples. One other mitigation of the problem with large sample sizes and neighbourhood-based dimension reduction methods that leads to improved tractability is the use of dimension reduction alignment, where we partition the datasets to create many dimension reductions that are subsequently aligned, using, e.g., the Procrustes transformation. Another benefit of dimension reduction alignment is that adding new data to the dimension reduction is much faster. Biological performance We investigated patterns in the data that are known to be present within haematology data. Indeed, we observed known diurnal patterns of white blood cells. This pattern was also observed within the data after dimension reduction, showing preservation of intraday variation by the dimension reduction methods. With respect to the prediction of samples belonging to subgroups in the data, we observed a significantly decreased performance in the reduced data. We deem that dimension reduction before prediction tasks in these data is not a preferable approach, since the loss of information or quality of data representation is an apparent issue. Increasing the number of dimensions might mitigate this, but can lead to more complex dimension reduction processes, and we observed that the manifold approaches did not converge to an optimal data representation with increasing dimensionality (Fig. ). Rather than using dimension reduction, more emphasis should be given to proper feature selection for analysis when the number of parameters is too high for the number of samples. This can, of course, be combined with dimension reduction.
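A minimal sketch of the type of comparison that underlies the prediction results discussed above: the same classifier is trained on the original features and on a reduced representation, and both are scored with accuracy and the Matthews correlation coefficient. The logistic regression model, the PCA reducer, and the split parameters are assumptions for illustration, not the models used in the study.

```python
# Same classifier on original vs. PCA-reduced features (illustrative sketch).
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, matthews_corrcoef
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def compare_prediction(X, y, n_components=6, random_state=0):
    """Return {representation: (accuracy, MCC)} for original and reduced features."""
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.15, stratify=y, random_state=random_state)
    results = {}
    for name, reducer in [("original", None), ("pca", PCA(n_components=n_components))]:
        steps = [StandardScaler()] + ([reducer] if reducer is not None else [])
        steps.append(LogisticRegression(max_iter=1000))
        model = make_pipeline(*steps).fit(X_train, y_train)
        y_pred = model.predict(X_test)
        results[name] = (accuracy_score(y_test, y_pred),
                         matthews_corrcoef(y_test, y_pred))
    return results
```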
In the literature, we found some beneficial results of using dimension reduction before prediction in different settings, since it can offer similar or better model performance than using the original data, at least in experimental circumstances, or because dimension reduction can be used for feature selection. However, this requires a robust dimension reduction method that also preserves distances. Considering our findings, the use of unsupervised dimension reduction techniques before modelling should be approached with caution or even refrained from. Finally, we applied clustering to assess cluster preservation in the reduced data. We find that using dimension reduction will result in an overestimation of the number of clusters when using HDBSCAN. This goes together with the loss of information, or quality. As mentioned before, this might be mitigable, but will increase computational costs and complexity. We do observe that CLL patients are still significantly different after applying dimension reduction methods. This means that, although dimension reduction methods do not completely preserve biological or clinical relevance, obvious extremes in the data are still apparent. However, the differences between CLL and non-CLL patients become less apparent after dimension reduction, resulting in a limited clinical diagnostic applicability of dimension reduction methods. It must be noted that, although these patients were indeed CLL patients, not all data points from these patients necessarily overlap with disease (e.g., blood samples taken before CLL was present). This may also result in overlap in counts with the general population. We were able to retroactively identify patients with CLL, but had no information on the exact point in time at which CLL was diagnosed. Further research Considering that we have limited our study to unsupervised non-parametric dimension reduction methods, a logical next step is to use supervised and/or parametric dimension reduction. An improvement over non-parametric UMAP is parametric UMAP, where a learnable parameterised model sits between the data and the final embedding loss, enabling the addition of, e.g., a global loss contribution. Additionally, when dealing with large-volume data, benefit might be gained from using fully parameterised dimension reduction methods such as Differentiating dimension reduction Networks, which are more interpretable compared to UMAP and t-SNE because of their parametric nature. Finally, when it comes to the generalizability of dimension reduction results, and working towards a more holistic, integrative approach to data analysis within healthcare, fully parameterised models such as variational autoencoders are interesting from the perspective of transfer learning, as they add flexibility to continue learning with incoming data and to transfer the resulting model to other institutions for continued training on their on-premise data, which can play a role in federated learning (see, e.g., DynAE). An interesting approach is the use of a contrastive loss function, as opposed to a reconstruction loss function, or a hybrid of a reconstruction loss on the output representation plus a contrastive loss on the latent representation for autoencoder architectures. In addition, research could study the use of (semi-)supervised dimension reduction approaches.
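As a minimal sketch of such a (semi-)supervised approach, the umap-learn package accepts a target vector at fit time and, by convention, treats samples labelled -1 as unlabelled, so sparse labels of the kind discussed below can steer the embedding; the function and parameter choices here are illustrative assumptions rather than part of the study.

```python
# Semi-supervised UMAP with sparsely available labels (illustrative sketch).
import numpy as np
import umap

def semisupervised_embedding(X, sparse_labels, n_components=6, n_neighbors=30):
    """Embed X guided by partial labels; unlabelled samples must be encoded as -1."""
    y = np.asarray(sparse_labels)
    reducer = umap.UMAP(n_components=n_components, n_neighbors=n_neighbors)
    embedding = reducer.fit_transform(X, y=y)  # entries equal to -1 are treated as unlabelled
    # Unlike TriMap/PaCMAP (see the limitations below), the fitted reducer can
    # also embed unseen samples via reducer.transform(X_new).
    return reducer, embedding
```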
To ensure clinical relevance, sparsely available labels can be employed, and consequently semi-supervised UMAP/t-SNE or Multi-Class, Multi-Label dimension reduction can be deployed. Other variables that could be of interest for this approach include demographic data (e.g., sex or age), data on time of day, or other relevant variables such as in-/outpatient status, hospital department, or even length of stay. Limitations Finally, our study is subject to some limitations. Most importantly, we lack a clear healthy control group, as our data is from tertiary care only. The data encompasses some samples that come from healthy individuals (such as patients who were referred to the UMCU but whose diagnostic work-up did not confirm any disease), but because labels are not available, we cannot identify these samples definitively. Secondly, we cannot completely rule out differences between haematology analyzers, as well as differences over time due to software versioning. However, since the machines are used in clinical care, and are calibrated as such, we deem this effect to be limited. Thirdly, for our work, we focused on the effect of dimension reduction techniques on downstream clinical tasks. The imputation method played a facilitatory role in allowing for a comparison over all samples. The majority of samples showed no missingness, and the same imputed data was used for all dimension reduction approaches. In addition, our missing data could be considered MAR, and therefore we do not believe that a potentially sub-par imputation method would skew the results in favour of any particular dimension reduction approach. Moreover, there are some limitations concerning the neighbourhood-based dimension reduction methods. One main limitation of UMAP is that the negative sampling process does not take into account the distance to the current point outside the number of nearest neighbours surrounding each point. This inaccuracy becomes more pronounced if the number of samples with respect to the number of nearest neighbours is increased. The result of this is that points that are just outside the direct neighbourhood are placed incorrectly, further away in the reduced data. Additionally, UMAP is a greedy algorithm, essentially requiring a copy of the original data such that incoming data can be interpolated onto the low-dimensional manifold. Furthermore, at the time of writing, neither TriMap nor PaCMAP provides a clear opportunity to embed unseen data into the space of an existing dimension reduction. This makes it harder, for example, to share dimension reductions between healthcare institutions, which might be beneficial since it would allow easier interpretation of haematology measurements in the context of the overall population. Another limitation of our study is that we did not use a topology preservation metric. Scoring based on topology metrics might result in a higher ranking for the manifold approaches, as these are especially designed to preserve topology. Furthermore, we have limited our research to PCA, GRP and three manifold approaches. Of course, many more methods are available. For example, self-organising maps have been used successfully on haematological data, specifically at the single-cell level.
When applying dimension reduction to high-dimensional, high-volume haematology data, we found that a global, statistics-based reduction technique such as PCA performs systematically better than much more recent non-linear minimum-distortion dimension reduction techniques in representing the underlying data. In general, the use of dimension reduction methods had limited biological performance, especially as a precursor for prediction tasks. Therefore, we advise that dimension reduction techniques be limited to data visualisation applications, e.g. for exploratory data analysis and research dissemination. The use of dimension reduction techniques as components in diagnostic pipelines may lead to decreased quality of integrated diagnostics in clinical care. |
Paternal well-being perception during childbirth: Experience of prepared Chilean fathers after a prenatal education intervention | 53f83099-5a75-475f-a1bc-e94aa5bb6eda | 11533973 | Patient Education as Topic[mh] | The birth process, since its origins, has corresponded to a social and family event, in which the arrival of a child consolidates the family figure for many couples. It is common for men to participate in one way or another during the pregnancy and birth of their own children, which has become a key aspect to their benefit as individuals and as fathers. Likewise, their participation benefits the couple's life and family project. The arrival of a child, and the process of establishing fatherly relationships, require not only the physical presence of the father, but his social and emotional presence during those special moments of life, which could be transcendental to the relationship with his child. A father who is present and active at birth is involved from the gestational stage, both in his role as a companion and in his role as a father. For other cultures, however, it would be enough to just be present, fulfilling their social and safety responsibilities for their partner, without participating or being directly involved during labor or birth. Whichever the case, fathers, and especially first-time fathers, require knowledge about pregnancy, birth, and parenting to meet their expectation of supporting or relating to the child. Unfortunately, not all fathers are involved or prepared to play their roles during the prenatal stage, and even less so during the time of birth. A father who does not have adequate preparation for his expectations and needs could live this experience in a scenario full of negative emotions of fear, stress, and anxiety, instead of living it as a fulfilling experience of well-being. Fathers' involvement and well-being concept. If the challenge is to include fathers in the role of co-parenting from the early stages of pregnancy, and especially by being involved at birth, it is important to highlight fathers' positive experiences as much as has been done for mothers and, in the same way, to try to understand the lived experience and the father's state of well-being. This concept has been widely defined and analyzed from historical philosophical perspectives and developed as a line of research within psychology; however, it has not been possible to fully agree upon its definition or the nature of its structure. Moreover, studies on the well-being experienced in significant life situations, such as the experience of fatherhood, where health processes are intertwined, have been scarce. Most authors have explained the concept of well-being, and its relationship to individual or social judgments, based on the constructs of life satisfaction and happiness. Similarly, given that well-being has been linked to happiness and life satisfaction, authors have attempted to define well-being through a hedonic approach, focusing mainly on affectivity or positive emotionality, enjoyment or pleasure. Accordingly, it has also been described through a eudaemonic perspective, which, in addition to affectivity, relates satisfaction, personal development, and growth to effort, meaning, achievements, and commitment, among others. What has been reported thus far in the literature, and what could relate the concept of well-being to fathers' experiences at birth, correspond to some of the elements that shape well-being, which have been explored independently.
The case of paternal affectivity at birth has been studied and understood through qualitative methods, while other elements that could be related to well-being have been approached from a quantitative perspective, considering the construct of "client satisfaction". On the other hand, the focus is, most of the time, on the experience of first-time fathers. Regardless of the approach taken, or of how the concept of paternal well-being is defined, what is proposed in this study is to understand, from a qualitative perspective, how fathers lived the experience of childbirth, and how they were involved in this process, within the framework of the concept of perceived well-being.
Primary Study (PS) Design Two studies with a qualitative approach were conducted between the Pontificia Universidad Católica de Chile and the University of Melbourne, Australia. Both studies aimed to explore the experience of fathers' participation in labor and birth, asynchronously, in 2016 and 2020, respectively. This article reports on Chilean fathers' experiences during childbirth between 2016 and 2017. A secondary supplementary data analysis of the primary study (PS) was conducted to reveal the phenomenon of the paternal experience at childbirth under another conceptual framework, different from that of the PS. PS Site The study was carried out in a mixed public and private health system of a University Health Network in Santiago de Chile and was continued in the participants' homes during the postpartum period. PS Selection Criteria Adult men, partners of pregnant women, were invited to participate in a prenatal education intervention (PEI) for fathers through Action Research. Information posters about the study and the contact details of the researchers were placed in the health centers. Fathers were invited to live a paternal experience of labor and birth in their different roles after the PEI. Fathers who would not participate in childbirth were excluded. PS Sample Definition It was a convenience sample of 12 adult fathers, who were trained in the PEI. The intervention included focus groups to understand the educational needs of expectant fathers. After that, they participated in three to four educational sessions that focused on male parents' participation during pregnancy, childbirth, and the postpartum period. Four themes synthesized the PEI of the PS: fathers' intention to establish contact with their baby from the prenatal period; fathers' need to know more about childbirth; fathers' desire to make physical father-child contact at childbirth; and training in how to care for the baby during the first days after birth. After the prenatal intervention, all of them were invited to share their lived experience of childbirth, starting 2 weeks postpartum. The final sample for interviews was 8 fathers, since, due to time issues, 4 of the 12 could not participate. Despite this, the saturation criterion for this new approach to the phenomenon was met. PS Data Collection Data was collected through 8 open, face-to-face, in-depth interviews, which lasted an average of 50 minutes. All interviews were carried out by the principal researcher, who was a female PhD. The guiding open-ended question for fathers was: Could you please share… How was the experience of participating and being involved in the birth of your child? All individual interviews were recorded, with the participants' prior authorization, on an MP3 audio system and finally transcribed verbatim by a trained research assistant. Participants' identities were kept confidential, and pseudonyms were used in the transcriptions of the interviews. Field notes were made by the researcher after each interview. Data Analysis and Treatment The secondary analysis was carried out by a Chilean-Australian research team (two Chilean coders and two Australian re-coders). The analysis procedure was also led by the principal investigator of the original study, and no software was used for data processing.
It was based on an interpretive paradigm and a phenomenological approach, which allowed the researchers to understand the experiences lived by fathers under a new framework based on the concept of well-being, since the phenomenon was observed as it was experienced and perceived by each participant. On the other hand, Husserl's phenomenological approach allowed the discovery of "essential units" in the data that had not been revealed in the PS. In this way, through a phenomenological approach, the central categories that emerged from this secondary analysis of the data could support the concept of well-being perceived by a father when narrating his childbirth experience. To complete the member-checking process, a summary of the main results was sent to the participants. Ethical Aspects The research procedures were developed in accordance with international ethical regulations and with the legislation that regulates health research in Chile. Both the PS and the current secondary analysis were approved by the institutional Scientific Ethics Committee at the Pontificia Universidad Católica de Chile (ID: 15-159; ID: 190313012), which validated the process and the informed consent form of the PS. To protect the identity of the participants in the transcriptions, pseudonyms chosen by the participants themselves were used, while for the data analysis, participants (P) were presented through alphanumeric coding, according to the chronological order of participation (P1, P2, P3, and so on).
The reports of the original study came from eight fathers with the following characteristics: average age of 37, with ages ranging from 27 to 54. Of the total, six participants were first-time fathers. Regarding nationality, one was Argentine and the other seven were Chilean. Regarding their educational levels, four participants had professional technical degrees, two had completed high school education, and two had completed university higher education. Their socio-economic levels were determined by their health insurance system, where three participants had private health insurance, and five were dependent on public healthcare. Regarding their marital status, four of them were married and the other four were single cohabitants. The information that emerged from the original interviews was analyzed and interpreted within the framework of the essential units that could characterize the fathers' experiences at birth, and potential relationships between their experiences and perceptions of well-being. It was observed that most fathers began narrating in a descriptive way, pointing out sequences of events. Later, as the interviews progressed, the fathers were transported to the moment of birth and relived the experience in a deeper way. From the phenomenological interpretative analysis of the narratives, each of the meanings (essential units) of the paternal lived experiences, and their relationships with well-being, were revealed. The four central themes or categories that emerged in relation to the Chilean fathers' lived experiences and well-being perception at childbirth are shown in . I. Feeling as a part of the healthcare team The fathers' narratives reflected a sense of feeling part of the healthcare team from two perspectives. The first indicates a form of satisfaction with respect to the reception by the health team, as in, for example: "So, she was very generous in the space (the midwife), with the sense of calm with which we lived it. We were quite lucky in that regard, because there was a lot of good disposition from everyone, so it was very good, it was very comforting". (P7) It is also worth highlighting an aspect of empowerment in the fathers' narratives, where they felt like active protagonists in the process and felt the impulse to participate actively. Feeling included and integrated into the health team. For some fathers, being able to put into practice what they had learned, and being able to understand each of the procedures that were performed at birth, was of great personal satisfaction. […] I did not leave at any time because I went there (to the neonatology unit), they cleaned her, they did the tests they had to do, and I was there, and it was great, I felt part of everything. (P3) A great satisfaction, I feel that I participated within what one can participate, and in what we had learned with the confidence that the workshops gave me. (P2) Feeling that participation is lived in complicity with the health team. For some fathers, a feeling of complicity with the healthcare team emerged, as if the fathers had felt that the professional team knew, valued, and became part of their competences and personal interests. There was a complicity (with the healthcare team), a planning of what we were going to do.—She is going to be born, we're going to give her to the mother, then we're going to examine her and we're going to hand her to you, so calm down—(P1) Feeling like obeying or receiving directions.
For some of them, however, receiving instructions from the healthcare team prevented them from freely taking on their roles and made them obedient participants with some degree of insecurity and nervousness. This could have made some fathers feel undervalued. They left me waiting outside […] I was a little nervous. Then they told me –put on these clothes– and there they told me – quickly! that we are already on the hour– (P4) II. Perceiving himself capable of containing and supporting his partner and being a guardian of the process Much of the content of the fathers' narratives focused on the experience of providing support to their partners, placing their partners' needs during labor at the center of attention. In such cases, they felt a responsibility and commitment toward their role in monitoring the labor process and the safety of their partner. Feeling helpful and able to guard and support his partner in labor. For some interviewees, perceiving themselves as being useful, and participating in assisting and supporting the couple, made them feel that their presence was relevant, both to accompany and to ensure the safety of their child and partner. This was described by fathers as a positive, special, and "magical" experience that left them very satisfied. And I felt useful, we knew what I had to do… I worried about the saline solution, and the anesthesia so they wouldn't run out. So, I felt like I was participating, and of the birth itself too, for my part I felt part of it. I don't know, it was special, everything was magical. […] I felt that she (the couple) trusted me. (P8) Feeling worried and caring about the partner's pain. Some fathers revealed that the issue of pain was a concern, but it also encouraged them or mobilized them to try to make their partner in labor feel better by first trying by themselves or asking someone for help. I tried to help and calm her down a little bit, we had moments when the anesthesia… I mean, I was trying to help in any way possible and notifying the team of anything. (P2) Feeling helpless from not being able to provide support or containment. It was observed that the same situation that prompted some fathers to occupy themselves and to act generated frustration in others, who felt that they were unable to do more to ease the pain of their partner, which became part of their own pain. I saw she was in a lot of pain and gave me… I do not know… sorrow. Obviously, it was for something, for something nice but just like seeing her that she was suffering and not being able to do anything but hold her, nothing could be done. (P1) III. Being committed to being a father from the first moment of contact with the child It can be seen in when fathers were able to have physical contact with their newborn child, according to the type of delivery. Fathers expressed how their feelings were evolving from the moment of birth. The first sensations and emotions were generated at the time of the child's first appearance. Anxiety and stress prevailed at the very moment of birth, when there was no certainty that the birth would be successful. However, after the birth occurred, for these trained participants, impulses toward fathering emerged when they held the child in their arms and established skin-to-skin father-child contact, after the period of mother-child contact. In that moment, the previous worried emotions turned to feelings of tenderness and peace. Feeling that they can carry the child naturally and calmly.
Some participants felt compelled to hug and hold their child with intimate contact, as if an internal, innate force was guiding them. Touching and being able to talk to their babies became a natural and spontaneous tendency. They felt that they were providing calm and tranquility. I didn't feel afraid to take her! It was… immediately everything was natural. […] the most beautiful thing was that she was born, well as her crying. When I held my daughter in my arms, she was quiet, she was super calm, relaxed. (P1) Perceiving that through physical contact (bonding) they could transmit care and protection to their child. For many fathers, physically contacting their child represented a very special moment for them, and they perceived that physical contact generated well-being in their babies, which was a comfortable feeling of tranquility and security. Some felt that they were able to pass love and protection through skin-to-skin contact. Once I had her on my chest skin to skin, she stood calm…in a nutshell it is like feeling protected and loved. (P3) I realized that she was so helpless […] she was very close to me, as if she asked me for affection and protection. (P6) Feeling a direct connection and father-child synchrony through physical contact. For fathers, physical contact with the child was a way to establish a deep connection that transcended the merely physical realm. From the moment they saw their child for the first time, an initial connection was established, but touching them established a deeper connection, and there was even a synchronization of breathing between the father and baby. The sensory stimuli that were generated through skin contact allowed them to connect and recognize each other. From the minute I saw my baby, I don't think he looked at me but if I felt a connection, I think it was automatic for me to take him, the touch gives you… an emotional connection, a connection of feelings more than something physical. (P2) When bonding was happening, I was feeling him on my chest and he felt calm. It was an exciting union; it felt as if our breath was one. My breathing was not normal, it was like accelerated, I was completed connected with him, it was fantastic. (P8) Perceiving that the child recognizes him as a "dad". For some fathers, the experience of feeling recognized by their child was very relevant. The moment of direct physical contact between them corresponded to the instance in which they tested if they could be recognized by contact with the hands or by voice. I took him in my hands and it was a sacred moment, he would recognize my hands, my voice […] with his eyes open, as if to say –here I am, I came into the world– I felt that, and I was very struck by that, as if he had said –this is my dad–. (P5) Feeling the child's life through contact. A very particular sensation transmitted in some of the narratives corresponded to the satisfaction of perceiving the life of the child, through the breath, the heartbeat, and the temperature of the skin. Fathers felt great about being able to perceive chest movements (breathing movements), which accounted for their child's vitality. I don't know how to describe it. I felt his warmth and he was moving a little; I felt his heart, that is, it was more the movement, the movement of the little chest with the breath. (P2) I was happy that she felt comfortable and safer; she felt the protection that was there - that more than anything, to feel her breathe too, it gave me joy that she was breathing.
(P4) Feeling how the commitment to be a father was sparked from birth. The moment of birth in general, and especially the moment of the father-child encounter, was transformed into a pact for the fathers, as well as a commitment to fathering and responsibility in care. I'm going to be forever grateful that I was able to spend those 15 to 20 minutes with her in proximity. Contact is something that doesn't have a price. I guess that will grow and will be strengthened in the future. (P7) She was there [the baby], and I loved her very much. That made me feel ready, like now I must take care of her forever; like that's why I didn't pay attention to other details. (P1) IV. Being wrapped in a whirlwind of emotions The fathers revealed in their testimonies that, regardless of how useful the prenatal preparation was, they did not feel empowered enough to manage and control their emotions. The event of childbirth, as a borderline experience for the mother, was even more so for the father. They felt that, despite having enough information about the birth process, they did not have control of it. This whirlwind of sensations and emotions, although mostly positive, reflected the state of affective well-being of the fathers. States of tranquility and pleasure turned to states of greater nervousness and stress, and vice versa, as the moment of birth approached. Experiencing the uncertainty of not knowing how to act. The experience reported by a father who failed to position himself in his role as a companion during the first stage of labor revealed the nervousness experienced, not knowing how to act when he only received indications from the healthcare team, and they were contradictory. A little nervous, just because there were many people, and everyone told me –stay there! – it was like… I didn't know where to position myself because one person told me something, and another told me something different. (P4) The experience of contradictory emotions when facing birth and fatherhood. During the moment of birth, positive and negative emotions and sensations were experienced simultaneously. Emotions such as fear, anxiety and uncertainty were related to the outcome of the birth, and the health of the newborn. The crying that represented the vitality of the newborn was a fundamental element that allowed fathers to turn the state of anxiety and uncertainty into a state of tranquility and well-being. In the latter state, they were able to enjoy the moment, reliving it as being magical and indescribable. My life changed, to look at her, to observe her movements, her crying… I was in the clouds. It is indescribable the happiness that it made me feel. I felt very serene. (P3) Focusing his attention only on his child – a magical moment that disconnects him from the environment. In their narratives, the fathers referred to a magical and peaceful experience that is difficult to reproduce. The encounter with their newborn made them distance themselves and disconnect from the entire environment, to focus their attention only on the emotion generated by the presence of their baby. I loved seeing my daughter come out, I went inward […] I tended to disconnect [from the environment]. I focused so much on the moment that I… I was filled with so many emotion… a lot. It was a unique sensation -an experience that can't be transmitted. But the truth is that they cannot be reproduced…all emotions at that time. It was a very magical moment. (P6) It was an incredible sensation of peace. There was nothing else around.
It was a very emotional moment, actually!! (P5)
This study was based on the narratives of men who lived a birth experience for which they had prepared during the pregnancy stage. This prenatal intervention focused on fathers' own interests and expectations. Hence, the findings of this study could be related to fathers who are involved from pregnancy, rather than the more "common fathers", who generally feel invisible, without any control over the birth process, and with a secondary role in this life event. A relevant aspect of this study was that it unearthed experiences lived by Chilean fathers from the perspective of subjective well-being. This can be deemed relevant, given that much of the available literature has described fathers' experiences of birth through the lens of satisfaction felt by the fathers as "clients". The latter is closer to the judgment or opinion that the father makes about the external conditions that come from the environment during the birth experience. Thus, it is a great challenge to discuss the results of this study based on the revelation of a phenomenon seen in a different way from what has typically been described in the literature. Four central categories emerged from the Chilean fathers' narratives of their experiences of well-being during their child's birth. Such categories were reflected in the judgments that each father made about his personal performance, as well as his sensations, emotions, and involvement in the process. Hence, fathers were able to explain the paternal experience at birth as a significant life situation, framed around the concept of well-being, which is more from a eudaemonic perspective related to achievements, commitment, and involvement than from a hedonic stance (sensations/emotions related to enjoyment and pleasure). One of the central themes or categories, namely Feeling as a part of the healthcare team, could be understood both from the perspective of well-being and from the concept of user satisfaction. On the one hand, under the concept of user satisfaction, fathers described how they perceived themselves participating in the system as collaborators during the labor process. From the perspective of well-being, they felt capable and empowered with the health team. Fathers, as other authors point out, could feel that they participated in the event when the health care provider integrated and welcomed them. Perhaps the difference that is reflected in this Chilean study, compared to what other authors have described, could be related to fathers' motivation and empowerment, which resulted from the preparation and special intervention they received before birth. The theme or category of Perceiving himself as being capable of containing and supporting his partner and being a guardian of the process could also be understood from the eudaemonic well-being perspective, as it relates to the role of a companion. The participants of this study, as reported above, felt that they, by themselves, were useful in providing support to their partner, which to them was fundamental. This agrees with what other authors have pointed out regarding the role of guardian, caregiver and support of the couple during childbirth. From the perspective of user satisfaction, it has been reported that when fathers felt they had made a positive difference in the birth and delivery process of their partners, they expressed greater satisfaction. Additionally, other fathers reported dissatisfaction when they had to work hard to feel included and involved in supporting and containing their partner.
There have also been reports of fathers feeling excluded from the process altogether. Fathers in this study felt that they were able to collaborate in overseeing and ensuring the safety of their partner and their child during birth. This contributed directly to the understanding of the phenomenon of paternal well-being, while in other studies the level of fathers' satisfaction related to the safety and health of the partner and newborn has focused exclusively on the actions and competencies of the health team. Regardless of whether interpretations are made through the lens of well-being or of user satisfaction, studies agree that fathers who do not perceive that everything is under control, especially when complications occur during the process, develop feelings of frustration and incompetence in providing support to the partner. One of the issues that deserves to be highlighted in this discussion concerns paternal well-being from the lens of eudaemony, specifically regarding the role of the father. The latter corresponds to the category of Being committed to being a father from the first moment of contact with the child. The core of this category was that fathers felt they could connect with their child through contact and commit to caring for, holding, and calming them, more than just feeling a physical connection. There is currently little evidence on well-being or lived experience focusing on the meaning of fathering at the time of physical father-child contact. The only studies reporting an active father role during the moment of contact with his child were in situations where mothers were disabled during childbirth. These studies mainly described benefits for the newborn, rather than for men and fathering in relation to emotions and feelings. The fourth theme, namely Being wrapped in a whirlwind of emotions, has been most frequently reported by other authors as experiencing a dynamic scenario of multiple emotions, typically described as a "carousel of emotions". For most studies, the emotions experienced by fathers who have not received adequate preparation have been predominantly negative, and related to fear, stress, anguish, and frustration. In this Chilean study, nevertheless, the "whirlwind of emotions", which also emerged in their narratives, tended to focus more on the positive results of emotional well-being, referred to as peace, serenity, and happiness, rather than on the negative emotions of fear and anxiety. This study is a contribution to the emergence and promotion of positive and present fathering. It also contributes to unveiling new male experiences, and different roles within the family group. Two major limitations of this study need to be addressed. The first corresponds to the fact that some participants were first-time fathers and others had previous experience. Hence, the data reported by the Chilean fathers could present biases, given that the information was analyzed as a whole group, without distinguishing participants with previous birth experiences. The second aspect to consider as a limitation is that only trained fathers participated in this study. Thus, participants in this study do not correspond to the general population of Chilean partners, who typically live a birth experience without preparation. As a strength of this study, it could be noted that this is the first exploratory effort in the Latin American region to gather fathers' experiences from the lens of well-being, rather than that of user satisfaction.
From this starting point, it could be possible for the nursing field to design a future instrument to collect fathers’ perceptions of well-being during childbirth, with its main dimensions derived from the four central themes or categories revealed in this study.
Fathers’ lived experience of childbirth could be explained through the concept of psychological well-being. Emotions and sensations typical of hedonic well-being were perceived by the trained fathers around the moment of birth, while everything related to the eudaemonic approach, such as commitment, achievements, and satisfaction with respect to parental roles, was revealed once the birth had occurred.
Guiding Principles for the Practice of Integrative Physical Therapy | ec4441a0-793d-4484-a889-ce0ebff6f413 | 10757068 | Patient-Centered Care[mh] | Integrative health is an evidence-based healing-oriented approach that takes into account the whole person, emphasizes lifestyle and the therapeutic relationship between practitioner and patient, and makes use of all appropriate therapies. The National Center for Complementary and Integrative Health further defines integrative health as bringing conventional health care approaches (medication, physical rehabilitation, psychotherapy, etc.) and complementary health approaches (acupuncture, yoga, probiotics, etc.) together with an emphasis on treating the whole person. The inclusion of physical rehabilitation by the National Center for Complementary and Integrative Health as a conventional approach begs the question of how would a physical therapist move into an integrative health paradigm. Integrative health is often conceptualized through a tree metaphor where a single symptom or condition is represented by a leaf or branch, the whole person and their full state of health and disease is the trunk, and the roots reflect deeper contributing factors. To extend the metaphor, the soil and air surrounding the tree represent the environment in which a person exists. This is further elucidated by the Center for Disease Control’s use of the term exposome , defined as a measure of all exposures from an individual’s diet, lifestyle, environment, etc. and how those exposures relate to their health and wellbeing. The metaphor can extend further to where the entire forest becomes representative of the larger society and culture, including social determinants of health (SDOH). When seeking integrative care, a patient can expect to explore multiple pathways of support throughout their whole tree and ecosystem ( ). Integrative health embraces a balanced approach to the 3-legged stool conceptualization of evidence-based medicine—giving each leg (patient preference, clinician expertise, and scientific evidence) appropriate weight. The mission of the American Physical Therapy Association is to advance the physical therapy profession toward the improvement of health for the whole of society with a vision of improving the individual human experience as part of the larger transformation of society through the optimization of movement. There is an emerging trend to integrate nutrition, prevention and wellness promotion, and population health, and the utilization of complementary and integrative health movement practices , into physical therapy education and practice. Although the profession is embracing elements of integrative health care, the practice of Integrative Physical Therapy (IPT) requires a foundational shift in how care is administered. Integrative health approaches are increasingly sought out, with continued growth expected. The general principles of integrative health practices are defined as therapeutic partnership, recognition of mind, body, spirit, and community as factors that influence health, evidence-informed conventional and alternative healing practices, prioritizing less invasive interventions, openness to new paradigms of healing, health promotion and prevention, and the need for integrative health providers to personally embody these principles. 
Given the risk that the integrative health movement could inadvertently justify interventions that lack an evidence base, the practice of IPT calls for greater clarity via the creation of an established model that captures the unique perspectives of physical therapists. To that end, the authors propose 5 specific guiding principles of integrative physical therapy practice: therapeutic alliance, whole person health (WPH), living systems theory, movement as an integrative experience, and salutogenesis ( ) . These principles are influenced by both modern, research-driven care models and ancient health systems.
Therapeutic Alliance The clinician and patient relationship is central to the IPT tree. This relationship is conceived as a partnership (rather than a hierarchy) and encompasses values of person-centered, collaborative, and trauma-informed care (TIC). In this context, the integrative physical therapist serves as a caring guide for the person in the patient role. Treatment plans are created collaboratively with awareness of complex power dynamics, aligning the integrative physical therapist’s clinical judgment and skill with the patient’s expressed values, preferences, and intentions for seeking treatment. Central to therapeutic alliance is the cultivation of the healthiest possible relational space. This is especially important for populations that have experienced systemic distrust of health care systems and historical disparities in services. In practice, therapeutic alliance relies on the clinician’s ability to engage in deep and embodied listening, garner trust, demonstrate cultural humility by seeing the relationship as a forum for mutual learning, and safely elicit a comprehensive patient narrative. There is evidence that history taking may be an effective pain reducer in and of itself. The integrative health model prioritizes time-specific strategies in support of this endeavor, including visit durations that are long enough to garner the trust for comprehensive history taking. When delivering physical therapist interventions within the larger context of humility, uncertainty, complexity of causality, and a purposefully attuned and caring relationship, the potential to meet implicit needs of the patient (such as to feel safe, to be seen, heard, validated, cared for, or empowered) increases. This approach to the development of therapeutic alliance also aims to support provider wellbeing, where the provider no longer shoulders the weight of being the sole expert dictating the care of the patient. In a market study of integrative health physicians, 67% of doctors surveyed reported quality of life as much better or somewhat better since beginning to practice integrative medicine. Trauma-Informed Care A useful tool to leverage the first guiding principle of therapeutic alliance is the practice of TIC. TIC involves recognition of the effects of traumatic events and the broad spectrum of potential impacts that may arise out of past life experiences. A full set of considerations in regard to Adverse Childhood Experiences and TIC within rehabilitation care has been described. Examples of TIC within the context of IPT practice include requesting permission for any form of touch, maximizing patient empowerment by presenting options within each treatment session, and maintaining awareness of cultural humility, especially when working with persons from historically minoritized communities and/or when the physical therapist and the patient do not share the same cultural identity. Whole Person Health As the second guiding principle, WPH means considering the whole person—not just isolated organs or body systems—and honoring the complex factors that promote health or disease. WPH emphasizes individual and collective empowerment for wellbeing across the interconnected biological, behavioral, social, and environmental areas. IPT practice spotlights each of these factors in cocreating a WPH rehabilitation plan of care, with an understanding of the profound impact these factors can have on goals, prognosis, choice of healing modalities, etc.
In this framework, the neuro-motor systems of the body are viewed as interdependent and intrinsically linked with all other body systems, alongside lifestyle and coping patterns, physical posture, environment and exposome, cognition, mental and emotional health, trauma and stress, culture, spirituality, and structural oppression. A mutually agreed upon assessment of historical and lifestyle factors that may be influencing the person’s condition can be an integral part of IPT care. With skilled communication, an exploration of physical activity, nutrition, hydration, sleep, substance use, and social support carries potential for transformation of root factors potentially limiting wellbeing and optimal recovery. However, in alignment with the guiding principle of therapeutic alliance, it is imperative to approach the topic of lifestyle renewal with the utmost care, compassion, and contextual understanding. Many lifestyle choices can be rooted in adaptive coping arising out of unmet needs, historical experience, SDOH, and resource availability. The broad set of SDOH are especially important to consider when working with historically marginalized groups such as persons who have been racialized, women, elderly, Indigenous populations, recent immigrants or refugees, and members of the lesbian, gay, bisexual, transgender, queer, intersex, and asexual communities. There is compelling evidence that experiences of discrimination are associated with worse health outcomes. Additional factors such as income, adverse childhood experiences, inadequate education, occupational and environmental hazard exposure, food access and security, zip code, neighborhood safety, and access to nature can all inform health trajectories toward or away from optimal health. Within a treatment visit, naming the influence of upstream and contextual determinants is a starting point to redistribute agency to the patient and put their struggles in a context that supports growth and empowerment. Specific strategies to further address SDOH in rehabilitation care have been delineated. Mind–Body Medicine Consistent with WPH, integrative physical therapists are uniquely suited to support development of optimal body–mind–environment relationships. Within an integrative health model, the concept of mind–body medicine carries with it the intrinsic understanding that the mind and the body are not 2 separate interacting entities, but are rather intrinsically connected aspects of the whole of the human experience. In an IPT context, this translates to an understanding that you cannot treat the physical body without also affecting mental and emotional states (and vice versa). The development of body awareness and the use of imagery and biofeedback principles are core aspects of mind–body medicine and can be used to support empowerment and the development of a positive relationship with one’s physical body. Furthermore, IPT aims to engage an examination of sociological factors relevant to the physical therapist profession such as has been delineated by Nicholls, including contributing to structural change, applying critical inquiry towards personal, professional, and societal transformation, moving beyond WPH to whole community and even planetary health and well-being. Living Systems Theory The third guiding principle, living systems theory explores the ways that living systems maintain themselves, interact, and adapt. 
Living systems are viewed as open (exchanging both energy and matter with the environment) and self-organizing (emergence of an overall order of a given system resulting from the collective interactions of individual components). In this way, living systems interact interdependently with their environment(s) and are dependent upon the interacting processes of multiple systems to survive and thrive. A dynamic living systems perspective demands tolerance of uncertainty as well as a skill set that optimizes strength-based inputs, carrying the potential to shift health trajectories toward greater well-being, regardless of the presence of injury, illness, or disease. Within the integrative health “tree” metaphor, integrative physical therapists are trained to look beyond the branch (symptom) level and dig into the roots (root causes), soil (environment and exposome), and forest (society and culture) and subsequently promote skills for enhanced resilience. Biotensegrity The biotensegrity model, an evolution of the reductive muscle and joint approach to understanding and treating musculoskeletal system dysfunctions, is another concept that supports the IPT living systems guiding principle. The bio-fascial-neuro-endocrine system is by nature integrative in that it encompasses, interweaves, and interpenetrates all organs, muscles, bones, and nerve fibers, giving the body functional structure, and enabling all body systems to operate in an integrated manner. The biotensegrity model takes into account the state of interdependence of all the body’s tissues in the transmission of force, the creation of movement and stillness, and the function of other systems throughout the body. Within an IPT perspective, understanding how the whole system responds to the demands placed on it becomes as important as the understanding of single joint and muscle strength and mobility tests and treatments. Nervous System Regulation As a second example of a living system in IPT practice, self-regulation provides a framework to explore how information is taken in, processed, and integrated into psychophysiological, somatoemotional, and biobehavioral responses. The autonomic nervous system is inseparably intertwined with the central nervous system and with every other system in the body, making the process of self-regulation of the stress response a holistic experience. As each organism is “hardwired” for the subconscious detection of danger (a process termed neuroception or threat appraisal), the central nervous system–autonomic nervous system–full body system functions on a continuum of mobilization and restoration. There are a number of evidence-informed theories (ie, polyvagal theory, neurovisceral integration theory, and the preparatory set) that underscore the relevance of approaching clinical care within a living systems perspective. Within living systems theory, we see that sources of stress and support can arise throughout the tree (individual body and mind), soil (environment and exposome), and ecosystem (society and culture)—at the mental, emotional, physical, environmental, nutritional, social, and/or existential levels. Mitigation of stress through any one of these levels carries influence within the larger whole. Vital to the IPT model of care is understanding the complexities and uniqueness of how stress moves from healthy demand into toxic stress for each patient, and collaboratively exploring stress management skills and resources.
This lens, while potentially helpful for understanding all clinical presentations, is especially helpful in working to support patients with chronic disease, persistent pain, and states of multi-morbidity. Through a living systems theory perspective, the integrative physical therapist can utilize an in-depth understanding of how such physiological responses impact symptomology, sensorimotor learning, behavior, and/or cognitive-emotional states. This aim includes considering how these self-regulatory states apply to movement interventions that elicit playfulness, joy, or personal connection (in contrast to conventional exercises). Both interoception (the ability to sense and regulate one’s internal state) and coregulation (the degree to which the state of one person’s nervous system regulation affects that of others) influence the experience of movement and are powerful tools that the integrative physical therapist can utilize within treatment sessions. Manual therapy, neuromuscular reeducation techniques, and therapeutic exercises can reflect an understanding of living systems theory when supporting safe body awareness and accurate interoception, cultivating regulated body–mind–behavioral states. When optimizing wellness through improved stress management is emphasized as part of an IPT care model, integrative physical therapists themselves also benefit. Embodying a positive lifestyle, practicing stress management, and utilizing integrative-minded movement practices can improve one’s own health and wellbeing and could serve as health care worker burnout prevention. , Movement as an Integrative Experience As movement specialists, physical therapists aim to work with patients to develop nonableist and customized movement patterns and/or physical activity practices. With alignment to the patient’s history, motivations, and goals, this effort may include more frequent movement and/or prescribed doses of aerobic, resistance, flexibility, and neuromotor control activities. However, an IPT approach reframes physical activity into movement experiences informed by mindfulness research and social, environmental, and cultural factors. , Integrative physical therapy supports examining energy management and balance in a holistic manner, including the relevance of rest and restorative processes. It is vital in integrative physical therapy to explore each patient’s relationship to their body and how that translates into their relationship with movement. In this paradigm, movement may also be considered metaphorically. Am I moving backward or forward in life? Where is “movement” happening (or not happening) in my body? In my life as a whole? The integrative physical therapist approach also acknowledges and leverages how movement influences and is influenced by learning and cognition ; mental, emotional, and spiritual well-being ; stress and trauma biology ; social relationships ; and the attainment of various needs including instinctual and survival oriented impulses. The act of conscious breathing itself is considered a movement practice, promoting neurophysiological regulation and the relaxation response. , As introduced in living systems theory, the science of interoception also underscores movement as an integrative experience, bringing insight into the relationships between the mind, brain, body, environment, and behavior. 
The development of self-referential processes around the integration of sensory signals with thoughts, beliefs, memories, intentions, posture, and movement can facilitate greater regulation and resilience. , This interoceptive skill building can help the person to notice dysregulation that may stem from interacting physical, mental, social, environmental, etc. stressors, and refocus on what is aligned with meaning, purpose, and committed action in the service of one’s goals and values. Mind and Body Movement Systems Integrative physical therapy may include conventional forms of exercise alongside mind and body movement systems such as yoga, tai chi, qigong, dance, the Feldenkrais Method, the Alexander Technique, Laban Movement Analysis, etc. In response to the Flexner Report of 1910, the prevailing biomedical model of health categorized many ancient, indigenous, and culturally specific health practices as alternative medicine. In some cases, this categorization has created an unhelpful polarization and marginalization of many of the world’s established healing practices. The field of integrative health strives to offer an enhanced approach, wherein complementary and alternative medicine practices can be synergistically integrated with conventional care in a person-centered and evidence-based manner. The inclusion of ancient mind and body movement practices into physical therapy moves us away from these practices being marginalized as an “alternative” to conventional care, but rather as “integrated” into a model that includes both conventional and traditional healing practices. Salutogenesis Integrative health models recognize that health and wellness are more than the absence of disease. Salutogenesis (health creation) is a concept that emphasizes facilitating a move toward greater well-being and flourishing, rather than solely moving away from illness, recognizing that even when aspects of an illness or injury remain, a person can continue to engage with life in a values-aligned way and experience well-being. , Applying salutogenesis to the practice of integrative physical therapy means moving beyond restoring function or mobility toward a place where patients’ efforts in rehabilitation connect to personal concepts of flourishing within their current circumstance or situation. Eudaimonia As one means toward embracing a salutogenic framework, the Aristotelian concept of eudaimonic well-being refers to thriving within a well-lived life, a steadfast joy that does not fluctuate with circumstance. Within a salutogenic framework, eudaimonic wellbeing is defined as: meaning and purpose, self-defined virtues and ethics, social connection, autonomy, and personal expressiveness, and self-actualization and realization. Eudaimonic well-being has been explored and found to be connected to many positive health outcomes including: improved immune function, decreased allostatic load, decreased all-cause mortality independent of age, gender, physical inactivity, and the presence of disease. , In chronic pain conditions, salutogenesis through eudaimonic well-being is related to lower levels of fatigue, disability, pain intensity, pain medication use, and to improved wellbeing, patient functioning, adjustment to chronic pain, depression symptoms, and life satisfaction. , Within an IPT practice, salutogenesis means not just focusing on treating an injury or disorder, but encompassing interventions that encourage thriving. 
This could include mindfulness practices, healthy lifestyle, joyful movement, and/or interoceptive practices for improved mind and body connection. Integration of the Guiding Principles It is essential to recognize that all 5 of the guiding principles (therapeutic partnership, WPH, living systems theory, movement as an integrative experience, and salutogenesis) are interdependent. For example, facilitating shifts in central nervous system–autonomic nervous system–full body regulation can impact how a person moves, their ability for social communication and relating, their pain experience, and more. Although systemic chronic inflammation is associated with a host of poor health outcomes and can be linked back to the interaction of SDOH with lifestyle (ie, poor nutrition, sedentary behavior, poorly managed stress, inadequate sleep, etc.), social experiences can also coregulate inflammation, and social pain has shared physiological representations with physical pain. This highlights the overlap between WPH, living systems, and therapeutic partnership within IPT care. WPH itself incorporates a salutogenic approach, while also acknowledging the person as a living system. And TIC transcends the therapeutic partnership to include the recognition of trauma’s impact on the nervous system, the ability to self or coregulate, one’s relationship to movement, and contributions to WPH and salutogenesis, crossing through all 5 of the guiding principles. Integrative Physical Therapy—From Theory to Practice Some existing specialties within physical therapy lend themselves particularly well to an IPT approach. One example could be found in the field of pelvic health, where the sensitivity required to facilitate healing in the intimate areas of the pelvis and genitals aligns with elements of IPT care, such as trauma informed care, therapeutic partnership, and WPH. Physical therapists who work with people who experience chronic pain are another group that have a natural congruence with IPT. The biopsychosocial and biopsychosocial-spiritual approaches to chronic pain care have strong grounding in WPH and salutogenesis. Physical therapists working with people undergoing a cancer journey are also appropriate for this type of care. Navigating the physical, social, and emotional burden of cancer treatment has facilitated the field of integrative oncology, where understanding how to apply salutogenesis, healthier movement habits, and stress management strategies all contribute to long-term health, wellbeing, and survivorship. Evidence that the physical therapy profession can be inclusive of an integrative health approach can be seen in clinical practice guidelines based around preventative care, nutrition, yoga, etc. , Although these clinical practice guidelines can be viewed as an emerging empirical evidence base for IPT care, the inclusion of preventative care, complementary and integrative health modalities, lifestyle education, etc. into an allopathic physical therapist paradigm is not equivalent to IPT care ( ). The shift to an integrative health paradigm requires that the 5 guiding principles are at the forefront of IPT patient care. The disintegrated existence of aspects of integrative health already present within the field of physical therapy indicates an even more pressing need for the profession to establish guiding principles. Physical therapists interested in this paradigm would benefit from a set of standards to guide professional development. 
An innovative example of an integrative physical therapy approach is currently offered at Hennepin Healthcare System in Minneapolis. Located in downtown Minneapolis, Hennepin Healthcare System includes a safety net hospital and outpatient clinics, providing care for low-income, uninsured, and vulnerable populations. The integrative physical therapy specialty at Hennepin Healthcare System is described as a whole person approach to rehabilitation, focusing on maximizing the body’s ability to self-heal, regulating the nervous system, and exploring mind, body, and spirit aspects of movement, helping patients uncover deeper compassion, joy, confidence, and purpose within their bodies. Sixty-minute treatment sessions allow for patient and IPT relationship building and wellness promotion through extensive subjective intakes that include inquiry around lifestyle, environment (safety and access to resources), social support, history of trauma, and SDOH. In this practice, the integrative physical therapists work in collaboration with the other integrative health practitioners in the organization, cofacilitating lifestyle-based group medical visits with integrative medical doctors, teaching trauma-sensitive yoga for the mother and baby mental health day hospital and cancer center, and practicing alongside acupuncturists and chiropractors. Although practices and modalities within the IPT specialty include conventional physical therapist practices, the therapists in this specialty carry additional training in mind–body science, yoga and yoga therapy, TIC, myofascial manual therapy, meditation, breathwork, and/or integrative medicine. In this case, each of the integrative health practices is more than an add-on modality and instead informs the overall holistic paradigm of care within the specialty. Another environment conducive to the IPT approach can be found in the Veterans Health Administration Whole Health approach to care. This approach represents a systems-wide shift emphasizing health promotion, patient-driven care, and the integration of complementary and integrative health services along with conventional care. This includes a move toward asking “what matters to you” rather than “what is the matter with you.” There are 3 components to the Veterans Health Administration Whole Health System. The first is the Pathway, which involves empowering individuals to explore what matters to them by reflecting on their mission, aspiration, and purpose to set personalized goals for wellbeing. The second component includes well-being programs to support skills building in these practices—including complementary and integrative health approaches, other well-being approaches, and health coaching. Lastly, the Whole Health Clinical Care program integrates what matters to the veteran into personalized goals and self-care, including healing environments and relationships, complementary and integrative health approaches, personal health planning, and health coaching. In the Veterans Health Administration, initiatives in physical therapy that incorporate these guiding principles in varying amounts include a biopsychosocial mentorship for physical therapists, codisciplinary pain care with physical therapists and behavioral health professionals, and the Tele-Pain-EVP Empower Veterans Program, which centers on interdisciplinary care and the exploration of purpose and one’s identified values while supporting self-care skills building, including self-regulation practices and WPH.
Challenges, Limitations, and Future Directions for IPT Although a full discussion of the challenges and limitations of IPT care is beyond the scope of this article, we recognize that deploying an IPT model of care demands both individual and structural effort. The dominance of our allopathic health care model creates myriad challenges when moving into IPT practice. From legal barriers to reimbursement for preventive or wellness-oriented care to limitations of diagnosis and referral frameworks that push physical therapists into the limited belief that they are treating a body part rather than the whole person, true integrative care requires transformation on individual and system levels. Similar to the efforts to advance physical therapy in mental health care, some might fear that facets of WPH are outside the scope of physical therapist practice. However, in the integrative health context, physical, mental, emotional, and spiritual wellbeing are all intrinsically connected; it is impossible to treat one without affecting the others. Clear delineation of scope of practice requires careful and ongoing discernment, and clinical supervision and mentoring models are recommended. IPT also necessitates treatment sessions with adequate time to build a therapeutic relationship between the physical therapist and the patient. Clinics that utilize shorter-duration sessions will likely not be conducive to an IPT approach. Integrative physical therapists may also require additional time between patients to self-regulate in order to engage with each patient and client in a fully present state. This point raises an additional challenge to an IPT practice given that this work requires commitment to one’s own self-care, lifestyle, and mind and body practices. As there is an ever-growing public demand for a more holistic approach to health care, delineating the comprehensive mind–body skills that support IPT practice will establish physical therapy as an essential profession within the growing field of integrative health. The specifics of appropriate training and competencies for integrative physical therapy practice remain to be defined. The American Medical Academy has set standards for physicians wishing to specialize in integrative health that include a 2-year fellowship program and an examination, among other requirements. The profession of physical therapy could benefit from examining the relevance of similar standards. Although a full exploration of standards for IPT practice is outside the scope of this article, determining the specifics for training and/or certification, including learning objectives, Commission on Accreditation in Physical Therapy Education standards, and continuing education certification curricula, could be a next step toward formalizing an IPT specialty. A more in-depth look at individual and structural barriers to IPT practice and how to overcome them has been delineated. Given the larger, rapidly changing health care landscape, the physical therapy profession must address the question of whether integrative physical therapy should become a board-recognized clinical specialty versus seeing the principles that inform integrative physical therapy as a natural embodiment of the American Physical Therapy Association mission and vision of societal transformation. Either way, there is an urgent need for objective, evidence-based information and training on integrative physical therapy approaches, as well as for further elaboration of clinical models that integrate these principles.
Conclusions This paper proposes a foundation for integrative physical therapy practice with 5 interdependent guiding principles: therapeutic partnership, WPH, living systems, movement as an integrative experience, and salutogenesis. This perspective centers on the interdependence of all aspects of the person’s experience and supports the optimization of well-being with a focus on meaning, patient-identified values, and a purpose-filled life. Fully embracing an integrative health model as a physical therapist requires a deepened, mindful therapeutic presence and a unique skill set to meet each proposed guiding principle within the dynamic complexities and challenges of modern health care. As we recognize that the health of all people is interdependent with ecological and planetary health, the need for an integrative approach to physical therapy that leverages these connections becomes even more pressing. This paper envisions a starting point of foundational guiding principles that constitute integrative physical therapist practice.
The clinician and patient relationship is central to the IPT tree. This relationship is conceived as a partnership (rather than a hierarchy) and encompasses values of person-centered, collaborative, and trauma-informed care (TIC). In this context, the integrative physical therapist serves as a caring guide for the person in the patient role. Treatment plans are created collaboratively with awareness of complex power dynamics, aligning the integrative physical therapist’s clinical judgment and skill within the patient’s expressed values, preferences, and intensions for seeking treatment. Central to therapeutic alliance is the cultivation of the healthiest possible relational space. This is especially important for populations that have experienced systemic distrust in health care systems and historical disparities in services. In practice, therapeutic alliance relies on the clinician’s ability to engage in deep and embodied listening, garner trust, demonstrate cultural humility by seeing the relationship as a forum for mutual learning, and safely elicit a comprehensive patient narrative. There is evidence that history taking may be an effective pain reducer in and of itself. The integrative health model prioritizes time-specific strategies in support of this endeavor, including visit durations that are long enough to garner the trust for comprehensive history taking. When delivering physical therapist interventions within the larger context of humility, uncertainty, complexity of causality, and a purposefully attuned and caring relationship, the potential to meet implicit needs of the patient (such as to feel safe, to be seen, heard, validated, cared for, or empowered) increases. This approach to the development of therapeutic alliance aims to also serve as support for provider wellbeing, where the provider no longer shoulders the weight of being the sole expert dictating the care of the patient. In a market study of integrative health physicians, 67% of doctors surveyed reported quality of life as much better or somewhat better since beginning to practice integrative medicine.
A useful tool to leverage the first guiding principle of therapeutic alliance is the practice of TIC. TIC involves recognition of the effects of traumatic events and the broad spectrum of potential impacts that may arise out of past life experiences. A full set of considerations in regards to Adverse Childhood Experiences and TIC within rehabilitation care have been described. Examples of TIC within the context of IPT practice include requesting permission for any form of touch, maximizing patient empowerment by presenting options within each treatment session, and awareness of cultural humility, especially when working with persons from historically minoritized communities and/or when the physical therapist and the patient do not share the same cultural identity.
As the second guiding principle, WPH means considering the whole person—not just isolated organs or body systems—and honoring the complex factors that promote health or disease. WPH emphasizes individual and collective empowerment for wellbeing across the interconnected biological, behavioral, social, and environmental areas. IPT practice spotlights each of these factors in cocreating a WPH rehabilitation plan of care, with an understanding of the profound impact these factors can have on goals, prognosis, choice of healing modalities, etc. In this framework, the neuro-motor systems of the body are viewed as interdependent and intrinsically linked with all other body systems, alongside lifestyle and coping patterns, physical posture, environment and exposome, cognition, mental and emotional health, trauma and stress, culture, spirituality, and structural oppression. A mutually agreed upon assessment of historical and lifestyle factors that may be influencing the person’s condition can be an integral part of IPT care. With skilled communication, an exploration of physical activity, nutrition, hydration, sleep, substance use, and social support carries potential for transformation of root factors potentially limiting wellbeing and optimal recovery. However, in alignment with the guiding principle of therapeutic alliance, it is imperative to approach the topic of lifestyle renewal with the utmost care, compassion, and contextual understanding. Many lifestyle choices can be rooted in adaptive coping arising out of unmet needs, historical experience, SDOH, and resource availability. The broad set of SDOH are especially important to consider when working with historically marginalized groups such as persons who have been racialized, women, elderly, Indigenous populations, recent immigrants or refugees, and members of the lesbian, gay, bisexual, transgender, queer, intersex, and asexual communities. There is compelling evidence that experiences of discrimination are associated with worse health outcomes. Additional factors such as income, adverse childhood experiences, inadequate education, occupational and environmental hazard exposure, food access and security, zip code, neighborhood safety, and access to nature can all inform health trajectories toward or away from optimal health. Within a treatment visit, naming the influence of upstream and contextual determinants is a starting point to redistribute agency to the patient and put their struggles in a context that supports growth and empowerment. Specific strategies to further address SDOH in rehabilitation care have been delineated. Mind–Body Medicine Consistent with WPH, integrative physical therapists are uniquely suited to support development of optimal body–mind–environment relationships. Within an integrative health model, the concept of mind–body medicine carries with it the intrinsic understanding that the mind and the body are not 2 separate interacting entities, but are rather intrinsically connected aspects of the whole of the human experience. In an IPT context, this translates to an understanding that you cannot treat the physical body without also affecting mental and emotional states (and vice versa). The development of body awareness and the use of imagery and biofeedback principles are core aspects of mind–body medicine and can be used to support empowerment and the development of a positive relationship with one’s physical body. 
Furthermore, IPT aims to engage an examination of sociological factors relevant to the physical therapist profession such as has been delineated by Nicholls, including contributing to structural change, applying critical inquiry towards personal, professional, and societal transformation, moving beyond WPH to whole community and even planetary health and well-being.
Consistent with WPH, integrative physical therapists are uniquely suited to support development of optimal body–mind–environment relationships. Within an integrative health model, the concept of mind–body medicine carries with it the intrinsic understanding that the mind and the body are not 2 separate interacting entities, but are rather intrinsically connected aspects of the whole of the human experience. In an IPT context, this translates to an understanding that you cannot treat the physical body without also affecting mental and emotional states (and vice versa). The development of body awareness and the use of imagery and biofeedback principles are core aspects of mind–body medicine and can be used to support empowerment and the development of a positive relationship with one’s physical body. Furthermore, IPT aims to engage an examination of sociological factors relevant to the physical therapist profession such as has been delineated by Nicholls, including contributing to structural change, applying critical inquiry towards personal, professional, and societal transformation, moving beyond WPH to whole community and even planetary health and well-being.
The third guiding principle, living systems theory explores the ways that living systems maintain themselves, interact, and adapt. Living systems are viewed as open (exchanging both energy and matter with the environment) and self-organizing (emergence of an overall order of a given system resulting from the collective interactions of individual components). In this way, living systems interact interdependently with their environment(s) and are dependent upon the interacting processes of multiple systems to survive and thrive. A dynamic living systems perspective demands tolerance of uncertainty as well as a skill set that optimizes strength-based inputs, carrying the potential to shift health trajectories toward greater well-being, regardless of the presence of injury, illness, or disease. Within the integrative health “tree” metaphor, integrative physical therapists are trained to look beyond the branch (symptom) level and dig into the roots (root causes), soil (environment and exposome), and forest (society and culture) and subsequently promote skills for enhanced resilience. Biotensegrity The biotensegrity model as an evolution of the reductive muscle and joint approach to understanding and treating musculoskeletal system dysfunctions is another concept that supports the IPT living system guiding principle. The bio-fascial-neuro-endocrine system is by nature integrative in that it encompasses, interweaves, and interpenetrates all organs, muscles, bones, and nerve fibers, giving the body functional structure, and enabling all body systems to operate in an integrated manner. The biotensegrity model takes into account the state of interdependence of all the body’s tissues in the transmission of force, the creation of movement and stillness, and the function of other systems throughout the body. Within an IPT perspective, understanding how the whole system responds to the demands placed on it becomes as equally important as the understanding of single joint and muscle strength and mobility tests and treatments. Nervous System Regulation As a second example of a living system in IPT practice, self-regulation provides a framework to explore how information is taken in, processed, and integrated into psychophysiological, somatoemotional, and biobehavioral responses. The autonomic nervous system is inseparably intertwined with the central nervous system and with every other system in the body, making the process of self-regulation of the stress response a holistic experience. As each organism is “hardwired” for the subconscious detection of danger (a process termed neuroception or threat appraisal), the central nervous system–autonomic nervous system –full body system functions on a continuum of mobilization and restoration. There are a number of evidence-informed theories (ie, polyvagal theory, neurovisceral integration theory, and the preparatory set) that underscore the relevance of approaching clinical care within a living systems perspective. Within living systems theory, we see that sources of stress and support can arise throughout the tree (individual body and mind), soil (environment and exposome), and ecosystem (society and culture)—at the levels of mental, emotional, physical, environmental, nutritional, social, and/or existential. Mitigation of stress through any one of these levels carries influence within the larger whole. 
Vital to the IPT model of care is the understanding of the complexities and uniqueness of how stress moves from healthy demand into toxic stress for each patient and collaboratively exploring stress management skills and resources. This lens, while potentially helpful for understanding all clinical presentations, is especially helpful in working to support patients with chronic disease, persistent pain, and states of multi-morbidity. Through a living systems theory perspective, the integrative physical therapist can utilize an in-depth understanding of how such physiological responses impact symptomology, sensorimotor learning, behavior, and/or cognitive-emotional states. This aim includes considering how these self-regulatory states apply to movement interventions that elicit playfulness, joy, or personal connection (in contrast to conventional exercises). Both interoception (the ability to sense and regulate one’s internal state) and coregulation (the degree to which the state of one person’s nervous system regulation affects that of others) influence the experience of movement and are powerful tools that the integrative physical therapist can utilize within treatment sessions. Manual therapy, neuromuscular reeducation techniques, and therapeutic exercises can reflect an understanding of living systems theory when supporting safe body awareness and accurate interoception, cultivating regulated body–mind–behavioral states. When optimizing wellness through improved stress management is emphasized as part of an IPT care model, integrative physical therapists themselves also benefit. Embodying a positive lifestyle, practicing stress management, and utilizing integrative-minded movement practices can improve one’s own health and wellbeing and could serve as health care worker burnout prevention. ,
The biotensegrity model as an evolution of the reductive muscle and joint approach to understanding and treating musculoskeletal system dysfunctions is another concept that supports the IPT living system guiding principle. The bio-fascial-neuro-endocrine system is by nature integrative in that it encompasses, interweaves, and interpenetrates all organs, muscles, bones, and nerve fibers, giving the body functional structure, and enabling all body systems to operate in an integrated manner. The biotensegrity model takes into account the state of interdependence of all the body’s tissues in the transmission of force, the creation of movement and stillness, and the function of other systems throughout the body. Within an IPT perspective, understanding how the whole system responds to the demands placed on it becomes as equally important as the understanding of single joint and muscle strength and mobility tests and treatments.
As a second example of a living system in IPT practice, self-regulation provides a framework to explore how information is taken in, processed, and integrated into psychophysiological, somatoemotional, and biobehavioral responses. The autonomic nervous system is inseparably intertwined with the central nervous system and with every other system in the body, making the process of self-regulation of the stress response a holistic experience. As each organism is “hardwired” for the subconscious detection of danger (a process termed neuroception or threat appraisal), the central nervous system–autonomic nervous system –full body system functions on a continuum of mobilization and restoration. There are a number of evidence-informed theories (ie, polyvagal theory, neurovisceral integration theory, and the preparatory set) that underscore the relevance of approaching clinical care within a living systems perspective. Within living systems theory, we see that sources of stress and support can arise throughout the tree (individual body and mind), soil (environment and exposome), and ecosystem (society and culture)—at the levels of mental, emotional, physical, environmental, nutritional, social, and/or existential. Mitigation of stress through any one of these levels carries influence within the larger whole. Vital to the IPT model of care is the understanding of the complexities and uniqueness of how stress moves from healthy demand into toxic stress for each patient and collaboratively exploring stress management skills and resources. This lens, while potentially helpful for understanding all clinical presentations, is especially helpful in working to support patients with chronic disease, persistent pain, and states of multi-morbidity. Through a living systems theory perspective, the integrative physical therapist can utilize an in-depth understanding of how such physiological responses impact symptomology, sensorimotor learning, behavior, and/or cognitive-emotional states. This aim includes considering how these self-regulatory states apply to movement interventions that elicit playfulness, joy, or personal connection (in contrast to conventional exercises). Both interoception (the ability to sense and regulate one’s internal state) and coregulation (the degree to which the state of one person’s nervous system regulation affects that of others) influence the experience of movement and are powerful tools that the integrative physical therapist can utilize within treatment sessions. Manual therapy, neuromuscular reeducation techniques, and therapeutic exercises can reflect an understanding of living systems theory when supporting safe body awareness and accurate interoception, cultivating regulated body–mind–behavioral states. When optimizing wellness through improved stress management is emphasized as part of an IPT care model, integrative physical therapists themselves also benefit. Embodying a positive lifestyle, practicing stress management, and utilizing integrative-minded movement practices can improve one’s own health and wellbeing and could serve as health care worker burnout prevention. ,
As movement specialists, physical therapists aim to work with patients to develop nonableist and customized movement patterns and/or physical activity practices. With alignment to the patient’s history, motivations, and goals, this effort may include more frequent movement and/or prescribed doses of aerobic, resistance, flexibility, and neuromotor control activities. However, an IPT approach reframes physical activity into movement experiences informed by mindfulness research and social, environmental, and cultural factors. , Integrative physical therapy supports examining energy management and balance in a holistic manner, including the relevance of rest and restorative processes. It is vital in integrative physical therapy to explore each patient’s relationship to their body and how that translates into their relationship with movement. In this paradigm, movement may also be considered metaphorically. Am I moving backward or forward in life? Where is “movement” happening (or not happening) in my body? In my life as a whole? The integrative physical therapist approach also acknowledges and leverages how movement influences and is influenced by learning and cognition ; mental, emotional, and spiritual well-being ; stress and trauma biology ; social relationships ; and the attainment of various needs including instinctual and survival oriented impulses. The act of conscious breathing itself is considered a movement practice, promoting neurophysiological regulation and the relaxation response. , As introduced in living systems theory, the science of interoception also underscores movement as an integrative experience, bringing insight into the relationships between the mind, brain, body, environment, and behavior. The development of self-referential processes around the integration of sensory signals with thoughts, beliefs, memories, intentions, posture, and movement can facilitate greater regulation and resilience. , This interoceptive skill building can help the person to notice dysregulation that may stem from interacting physical, mental, social, environmental, etc. stressors, and refocus on what is aligned with meaning, purpose, and committed action in the service of one’s goals and values. Mind and Body Movement Systems Integrative physical therapy may include conventional forms of exercise alongside mind and body movement systems such as yoga, tai chi, qigong, dance, the Feldenkrais Method, the Alexander Technique, Laban Movement Analysis, etc. In response to the Flexner Report of 1910, the prevailing biomedical model of health categorized many ancient, indigenous, and culturally specific health practices as alternative medicine. In some cases, this categorization has created an unhelpful polarization and marginalization of many of the world’s established healing practices. The field of integrative health strives to offer an enhanced approach, wherein complementary and alternative medicine practices can be synergistically integrated with conventional care in a person-centered and evidence-based manner. The inclusion of ancient mind and body movement practices into physical therapy moves us away from these practices being marginalized as an “alternative” to conventional care, but rather as “integrated” into a model that includes both conventional and traditional healing practices.
Integrative physical therapy may include conventional forms of exercise alongside mind and body movement systems such as yoga, tai chi, qigong, dance, the Feldenkrais Method, the Alexander Technique, Laban Movement Analysis, etc. In response to the Flexner Report of 1910, the prevailing biomedical model of health categorized many ancient, indigenous, and culturally specific health practices as alternative medicine. In some cases, this categorization has created an unhelpful polarization and marginalization of many of the world’s established healing practices. The field of integrative health strives to offer an enhanced approach, wherein complementary and alternative medicine practices can be synergistically integrated with conventional care in a person-centered and evidence-based manner. The inclusion of ancient mind and body movement practices into physical therapy moves us away from these practices being marginalized as an “alternative” to conventional care, but rather as “integrated” into a model that includes both conventional and traditional healing practices.
Integrative health models recognize that health and wellness are more than the absence of disease. Salutogenesis (health creation) is a concept that emphasizes facilitating a move toward greater well-being and flourishing, rather than solely moving away from illness, recognizing that even when aspects of an illness or injury remain, a person can continue to engage with life in a values-aligned way and experience well-being. , Applying salutogenesis to the practice of integrative physical therapy means moving beyond restoring function or mobility toward a place where patients’ efforts in rehabilitation connect to personal concepts of flourishing within their current circumstance or situation. Eudaimonia As one means toward embracing a salutogenic framework, the Aristotelian concept of eudaimonic well-being refers to thriving within a well-lived life, a steadfast joy that does not fluctuate with circumstance. Within a salutogenic framework, eudaimonic wellbeing is defined as: meaning and purpose, self-defined virtues and ethics, social connection, autonomy, and personal expressiveness, and self-actualization and realization. Eudaimonic well-being has been explored and found to be connected to many positive health outcomes including: improved immune function, decreased allostatic load, decreased all-cause mortality independent of age, gender, physical inactivity, and the presence of disease. , In chronic pain conditions, salutogenesis through eudaimonic well-being is related to lower levels of fatigue, disability, pain intensity, pain medication use, and to improved wellbeing, patient functioning, adjustment to chronic pain, depression symptoms, and life satisfaction. , Within an IPT practice, salutogenesis means not just focusing on treating an injury or disorder, but encompassing interventions that encourage thriving. This could include mindfulness practices, healthy lifestyle, joyful movement, and/or interoceptive practices for improved mind and body connection.
It is essential to recognize that all 5 of the guiding principles (therapeutic partnership, WPH, living systems theory, movement as an integrative experience, and salutogenesis) are interdependent. For example, facilitating shifts in central nervous system–autonomic nervous system–full body regulation can impact how a person moves, their ability for social communication and relating, their pain experience, and more. Although systemic chronic inflammation is associated with a host of poor health outcomes and can be linked back to the interaction of SDOH with lifestyle (ie, poor nutrition, sedentary behavior, poorly managed stress, inadequate sleep, etc.), social experiences can also coregulate inflammation, and social pain has shared physiological representations with physical pain. This highlights the overlap between WPH, living systems, and therapeutic partnership within IPT care. WPH itself incorporates a salutogenic approach, while also acknowledging the person as a living system. And TIC transcends the therapeutic partnership to include the recognition of trauma’s impact on the nervous system, the ability to self or coregulate, one’s relationship to movement, and contributions to WPH and salutogenesis, crossing through all 5 of the guiding principles. Integrative Physical Therapy—From Theory to Practice Some existing specialties within physical therapy lend themselves particularly well to an IPT approach. One example could be found in the field of pelvic health, where the sensitivity required to facilitate healing in the intimate areas of the pelvis and genitals aligns with elements of IPT care, such as trauma informed care, therapeutic partnership, and WPH. Physical therapists who work with people who experience chronic pain are another group that have a natural congruence with IPT. The biopsychosocial and biopsychosocial-spiritual approaches to chronic pain care have strong grounding in WPH and salutogenesis. Physical therapists working with people undergoing a cancer journey are also appropriate for this type of care. Navigating the physical, social, and emotional burden of cancer treatment has facilitated the field of integrative oncology, where understanding how to apply salutogenesis, healthier movement habits, and stress management strategies all contribute to long-term health, wellbeing, and survivorship. Evidence that the physical therapy profession can be inclusive of an integrative health approach can be seen in clinical practice guidelines based around preventative care, nutrition, yoga, etc. , Although these clinical practice guidelines can be viewed as an emerging empirical evidence base for IPT care, the inclusion of preventative care, complementary and integrative health modalities, lifestyle education, etc. into an allopathic physical therapist paradigm is not equivalent to IPT care ( ). The shift to an integrative health paradigm requires that the 5 guiding principles are at the forefront of IPT patient care. The disintegrated existence of aspects of integrative health already present within the field of physical therapy indicates an even more pressing need for the profession to establish guiding principles. Physical therapists interested in this paradigm would benefit from a set of standards to guide professional development. An innovative example of an integrative physical therapy approach is currently offered at Hennepin Healthcare Systems in Minneapolis. 
Located in downtown Minneapolis, Hennepin Healthcare System includes a safety net hospital and outpatient clinics, providing care for low-income, uninsured, and vulnerable populations. The integrative physical therapy specialty at Hennepin Healthcare System is described as a whole person approach to rehabilitation, focusing on maximizing the body's ability to self-heal, nervous system regulation, and exploring mind, body, and spirit aspects of movement, helping patients uncover deeper compassion, joy, confidence, and purpose within their bodies. Sixty-minute treatment sessions allow for patient and IPT relationship building and wellness promotion through extensive subjective intakes that include inquiry around lifestyle, environment (safety and access to resources), social support, history of trauma, and SDOH. In this practice, the integrative physical therapists work in collaboration with the other integrative health practitioners in the organization, cofacilitating lifestyle-based group medical visits with integrative medical doctors, teaching trauma-sensitive yoga for the mother and baby mental health day hospital and cancer center, and practicing alongside acupuncturists and chiropractors. Although practices and modalities within the IPT specialty include conventional physical therapist practices, the therapists in this specialty carry additional training in mind–body science, yoga and yoga therapy, TIC, myofascial manual therapy, meditation, breathwork, and/or integrative medicine. In this case, each of the integrative health practices is more than an add-on modality; rather, each informs the overall holistic paradigm of care within the specialty. Another environment conducive to the IPT approach can be found in the Veterans Health Administration Whole Health approach to care. This approach represents a systems-wide shift emphasizing health promotion, patient-driven care, and the integration of complementary and integrative health services along with conventional care. This includes a move toward asking "what matters to you, vs what is the matter with you." There are 3 components to the Veterans Health Administration Whole Health System. The first is the Pathway, which includes empowering individuals to explore what matters to them through reflecting on one's mission, aspiration, and purpose to set personalized goals for wellbeing. The second component includes well-being programs to support skills building of these practices—including complementary and integrative health approaches, other well-being approaches, and health coaching. Lastly, the Whole Health Clinical Care program integrates what matters to the veteran into personalized goals and self-care, including healing environments and relationships, complementary and integrative health approaches, personal health planning, and health coaching. In the Veterans Health Administration, initiatives in physical therapy that incorporate these guiding principles in varying amounts include a biopsychosocial mentorship for physical therapists, codisciplinary pain care with physical therapists and behavioral health professionals, and the Tele-Pain-EVP Empower Veterans Program, which centers on interdisciplinary care and the exploration of purpose and one's identified values while supporting self-care skills building including self-regulation practices and WPH.
Although a full discussion of the challenges and limitations of IPT care is beyond the scope of this article, we recognize that deploying an IPT model of care demands both individual and structural effort. The dominance of our allopathic health care model creates myriad challenges when moving into IPT practice. From legal barriers to reimbursement for preventive or wellness-oriented care to limitations of diagnosis and referral frameworks that push physical therapists into the limited belief that they are treating a body part rather than the whole person, true integrative care requires transformation on individual and system levels. Similar to the efforts to advance physical therapy in mental health care, some might fear that facets of WPH are outside the scope of physical therapist practice. However, in the integrative health context, physical, mental, emotional, and spiritual wellbeing are all intrinsically connected; it is impossible to treat one without affecting the others. Clear delineation of scope of practice requires careful and ongoing discernment, and clinical supervision and mentoring models are recommended. IPT also necessitates treatment sessions with adequate time to build a therapeutic relationship between the physical therapist and the patient. Clinics that utilize shorter-duration sessions will likely not be conducive to an IPT approach. Integrative physical therapists may also require additional time between patients to self-regulate in order to engage with each patient and client in a fully present state. This point raises an additional challenge to an IPT practice given that this work requires commitment to one's own self-care, lifestyle, and mind and body practices. As there is an ever-growing public demand for a more holistic approach to health care, delineating the comprehensive mind–body skills that support IPT practice will establish physical therapy as an essential profession within the growing field of integrative health. The specifics on appropriate training and competencies for integrative physical therapy practice remain open. The American Medical Academy has set standards for physicians wishing to specialize in integrative health that include a 2-year fellowship program and an examination, among other requirements. The profession of physical therapy could benefit from examining the relevance of similar standards. Although a full exploration of standards for IPT practice is outside the scope of this article, determining the specifics for training and/or certification, including learning objectives, Commission on Accreditation in Physical Therapy Education standards, and continuing education certification curriculum, could be a next step toward formalizing an IPT specialty. A more in-depth look at individual and structural barriers to IPT practice, and how to overcome them, has been delineated. Given the larger, rapidly changing health care landscape, the physical therapy profession must address the question of whether integrative physical therapy should become a board-recognized clinical specialty versus seeing the principles that inform integrative physical therapy as a natural embodiment of the American Physical Therapy Association mission and vision of societal transformation. Either way, there is an urgent need for objective, evidence-based information and training on integrative physical therapy approaches and for further elaboration and clinical models that integrate these principles.
This paper proposes a foundation for integrative physical therapy practice with 5 interdependent guiding principles of therapeutic partnership, WPH, living systems, movement as an integrative experience, and salutogeneis. This perspective centers on the interdependence of all aspects of the person’s experience and supports the optimization of well-being with a focus on meaning, patient-identified values, and a purpose-filled life. Fully embracing an integrative health model as a physical therapist requires a deepened, mindful therapeutic presence, and a unique skill set to meet each proposed guideline within the dynamic complexities and challenges of modern health care. As we recognize that the health of all people is interdependent with ecological and planetary health, the need for an integrative approach to physical therapy that leverages these connections becomes even more pressing. This paper envisions a starting point of foundational guiding principles that constitute integrative physical therapist practice.
|
The differential impact of pediatric COVID-19 between high-income countries and low- and middle-income countries: A systematic review of fatality and ICU admission in children worldwide | 9153c0f5-2e01-40b1-8995-0e35448fb761 | 7845974 | Pediatrics[mh] | Since December 2019, the Severe Acute Respiratory Syndrome (SARS) coronavirus 2 (SARS-CoV-2) has spread throughout the world and coronavirus disease 2019 (COVID-19) has resulted in more than 85 million cases and approximately 1.8 million deaths worldwide as of Dec 31, 2020 . Case counts and deaths by country have been reported in near real time by several sources . Previous large cohort studies from many countries and review articles of children with COVID-19 concluded that death was rare . However, the overall global impact of COVID-19 in children is presently unknown, as is how the impact for children varies between countries. Investigating the differential impact of COVID-19 in children by country is important to direct limited global resources to more vulnerable regions. The number of deaths and intensive care unit (ICU) admissions per capita as well as case fatality rate (CFR) and ICU admission rate due to COVID-19 have been widely used as measures of COVID-19 severity and are more likely to reflect true impact than number of cases which is dependent on local testing patterns and the size of population. Notably, less access to ICU level care in more resource limited countries results in an inability to provide the highest level care to the most critically ill which may be related to higher case fatality . To better understand the global epidemiology of COVID-19 in children and differences in outcome across countries, we conducted a systematic review of multiple databases in multiple languages as well as national reports from governments or public health authorities.
This study was conducted in accordance with Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) guidelines . This systematic review is registered with PROSPERO (registration number: CRD42020179696). Our primary objective was to provide robust global data on pediatric COVID-19 deaths and ICU admissions. Our secondary objectives were to determine pediatric CFR and ICU admission rates and to compare results across HIC and LMIC. The case definition included children (aged 0–19 years) with polymerase chain reaction (PCR)-confirmed SARS-CoV-2 infection. Serologically diagnosed cases without PCR confirmation were excluded from our analysis. The targeted focus of this study was to evaluate the global impact of PCR-confirmed SARS-CoV-2 infection in children. The epidemiology of the multi-system inflammatory syndrome in children (MIS-C), including deaths and ICU admissions from this condition, was excluded. Case reports, case series, prospective studies, case control studies, and cross-sectional studies were all eligible data sources.
Search strategy
We searched MEDLINE, Embase, the Cochrane Library, CINAHL and WHO COVID-19 databases to identify articles related to or including any COVID-19 cases without language restriction. To more fully capture global data, we also searched the following non-English databases: CNKI and Wanfang (Chinese), Kmbase (Korean), ICHUSI Web (Japanese), LILAC and SciELO (Spanish and Portuguese), LiSSa (French), Ulakbim (Turkish), Magiran (Farsi), Islamic world citation center (Arabic) and Russian Scientific Electronic Library (Russian). The article search was completed three times: the first search, using all databases, was conducted on April 30 (search 1), and the second and third searches, using only MEDLINE and Embase, were conducted on August 10 (search 2) and December 7 (search 3). The details of the search strategy are provided in .
Study selection and risk of bias assessment
Articles were extracted from each database using Endnote X8 (Clarivate Analytics US LLC, PA), and then the software package COVIDENCE (Veritas Health Innovation; www.covidence.org ) was used to manage them. Extracted studies were screened by two reviewers independently. Full text reviews of English and non-English articles were conducted by two independent reviewers fluent in the language of the study or report. Studies were screened using the inclusion and exclusion criteria described below. We included studies with any pediatric COVID-19 case (aged 0–19 years) from which fatality or ICU admission data could be extracted. We excluded studies without age-specific information, without any pediatric cases, and without any extractable outcome data for the pediatric cases. Review articles, guidelines, expert opinions, articles whose main topic was infection control, mathematical modelling studies, molecular studies, animal studies, studies about other coronaviruses, and overlapping datasets were also excluded. If more than one study from the same population was found (overlapping data), the study with the most comprehensive result or the largest pediatric sample size was included. For feasibility reasons, in search 3 we included articles reporting nationwide subgroup outcomes (neonatal, age-specific or ICU respiratory support outcomes) but excluded other articles from countries with comprehensive national reports for fatality and ICU outcomes.
For articles with overlapping data, where one article had more comprehensive data for fatality and the other article had more comprehensive ICU admission data, both were included and the more comprehensive data was extracted from the appropriate paper. Additional methodologic details of dealing with overlapping data are provided in (page 18–19). Discrepancies in inclusion or exclusion during screening were discussed between the two reviewers until consensus was achieved or resolved by a third reviewer. In cases of ambiguity of data in extracted articles, for example if we could not rule out a possibility of overlap with another article or if we could not find clear outcomes of included cases (ICU admission or maximal respiratory support), the authors of the studies were contacted. Two independent reviewers evaluated the quality of included studies using the Critical Appraisal Tools of the Joanna Briggs Institute . Discrepancies in the risk of bias assessment were discussed between the two reviewers until consensus was achieved or resolved by a third reviewer.
National report search
To assess grey literature sources, we followed the Canadian Agency for Drugs and Technologies in Health checklist and searched health surveys, case notification systems, national reports or survey data and governmental official reports from national Centers for Disease Control (CDC), governments, ministries of health, national departments of health or national academic associations in all 218 countries defined by the World Bank. All references from their websites were investigated to find national reports for pediatric COVID-19 cases and age-specific COVID-19 cases using the same inclusion and exclusion criteria for articles. The national report data searches were performed on May 22–24 (search 1), August 27–September 1 (search 2), and December 7–11, 2020 (search 3).
Data extraction and analysis
A Microsoft Excel datasheet was used to track all included articles, governmental and public health reports. Data extraction forms were independently completed by each reviewer. For included studies and reports, we extracted the number, age and clinical outcomes (death and ICU admission) of all identified pediatric cases. For children admitted to ICU, if available, data on the type of ICU admission, maximal respiratory support and required vasopressor use were also extracted. Additional clinical data including laboratory results and directed treatments were not extracted. A third reviewer checked the article/report list and reviewed the included extraction forms of the first two reviewers to ensure accuracy, to ensure no duplication of articles or datasets, and to resolve any discrepancies. First, all pediatric COVID-19 deaths and ICU admissions were pooled to calculate the global and national number of deaths and ICU admissions in children. If a child was admitted to ICU and died due to COVID-19, we counted the case as one death and one ICU admission. Then, COVID-19 deaths or ICU admissions/1,000,000 children were calculated by dividing the number of pediatric COVID-19 deaths or ICU admissions in each country by the total number of children (0–19 years of age) in that population. National pediatric populations by age for all countries were obtained from United Nations data (estimated population on July 1, 2020) . Data including the number of pediatric SARS-CoV-2 infections and outcomes (pediatric death or ICU admission) were also synthesized to calculate the combined pediatric CFR and ICU admission rate.
Data with both a denominator (the number of confirmed SARS-CoV-2 infections) and a numerator (the number of deaths or ICU admissions) were included in our calculation of CFR and ICU admission rate. The outcome data were pooled at a country level. We combined the number of pediatric cases and the number of pediatric outcome events from all data extracted to calculate the pooled pediatric CFR and ICU admission rate. The number of deaths and ICU admissions as well as CFRs and ICU admission rates were compared between high-income countries (HICs) and low- and middle-income countries (LMICs), which were defined according to the World Bank Country Classification . LMICs included low-, lower-middle-, and upper-middle-income countries. World geographical maps were built with the geographic information system QGIS (v3.10, https://qgis.org ) to illustrate national COVID-19 deaths/1,000,000 children and CFR for children. Age-specific CFR and rate of ICU admission were calculated among the included cases with detailed age information. Microsoft Excel and Stata v14.2 software were used for data synthesis. Outcomes were presented with 95% confidence intervals. Pearson's chi-squared tests were performed to compare outcomes between groups. If a CFR or ICU admission rate could not be calculated because either the age range or dates of reporting of cases and deaths or ICU admission differed, or if only the absolute number of hospitalizations, instead of the number of cases, was reported, then we did not include these national data in our calculation of global CFR or ICU admission rate, but did include them in the total number of deaths or ICU admissions. Data which may not be nationally representative, including case reports or case series from a specific hospital and subnational data (data from a specific city), as well as clinically diagnosed cases without PCR confirmation, were not included in our primary calculations, but were evaluated in the sensitivity analysis to investigate the robustness of our result. In order to ensure data accurately reflected the time frame between searches 2 and 3, we excluded countries with CDC COVID-19 levels 2 to 4 (moderate, high, very high transmission) that had not updated their national reports for more than 2 months prior to December 7 . These national reports were evaluated in the sensitivity analysis.
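The pooled-rate arithmetic and between-group comparison described above can be illustrated with a short sketch. The Python snippet below is not the authors' analysis code (data synthesis was performed in Microsoft Excel and Stata); the counts are hypothetical placeholders, and the 95% confidence intervals use a simple normal approximation, which may differ slightly from the interval method used in the published analysis.

# Minimal sketch of the pooled-rate calculations described above (hypothetical counts).
from math import sqrt
from scipy.stats import chi2_contingency

def rate_with_ci(events, total, z=1.96):
    # Pooled proportion with a normal-approximation 95% confidence interval
    p = events / total
    se = sqrt(p * (1 - p) / total)
    return p, max(p - z * se, 0.0), p + z * se

def per_million_children(events, pediatric_population):
    # Events per 1,000,000 children, using the national 0-19 year population as denominator
    return events / pediatric_population * 1_000_000

# Hypothetical pooled counts (deaths, PCR-confirmed pediatric cases) for two income groups
groups = {"HIC": (100, 1_000_000), "LMIC": (1_900, 800_000)}

for name, (deaths, cases) in groups.items():
    cfr, lo, hi = rate_with_ci(deaths, cases)
    print(f"{name}: CFR = {cfr:.3%} (95% CI {lo:.3%}-{hi:.3%})")

# Pearson chi-squared test comparing case fatality between the two groups
table = []
for deaths, cases in groups.values():
    table.append([deaths, cases - deaths])  # [died, survived]
chi2, p_value, dof, expected = chi2_contingency(table, correction=False)  # plain Pearson test, no continuity correction
print(f"chi-squared p-value: {p_value:.2e}")

With real per-country counts pooled by income group, the same few lines reproduce the style of comparison reported in the Results: rates with 95% confidence intervals and a chi-squared p value.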
A total of 28,557 articles from 16 databases were identified. After duplicates were removed, 16,027 records remained. Title and abstract screening excluded 13,562 records. The full text of 2,465 records were assessed in detail. 443 articles met the eligibility criteria , the characteristics of which are presented in . We also identified 225 national reports from governments, public health sectors, or national academic associations, of which 145 national reports met the inclusion criteria . The characteristics of the included national reports are presented in . The pediatric death and ICU admission data were available from national reports of 138 countries (59 from HIC and 79 from LMIC) and 22 countries (10 from HIC and 12 from LMIC), respectively. National reports from 8 countries had not been updated for more than 2 months before the date of final search (December 7, 2020). Eight countries did not report any confirmed COVID-19 cases in children as of December 7, 2020. Neither the number of pediatric deaths nor pediatric ICU admissions were available from 80 countries with confirmed SARS-CoV-2 infections as of the same date. In total, there were 3,788 pediatric COVID-19 deaths and 3,118 ICU admissions identified from published literature or national reports worldwide. Among the 3,788 reported pediatric COVID-19 deaths, 321 (8.5%) and 3,394 (91.5%) deaths were reported from HIC and LMIC, while 16.5% and 83.5% of pediatric population from all included countries were from HIC and LMIC respectively. Of the 3,118 ICU admissions, 2,234 (71.7%) were from HIC, while only 720 (28.3%) ICU admissions were reported from LMIC. shows the variation of deaths/1,000,000 children by country. There is an excess of deaths in Middle and South American countries. Among countries where the nation-wide number of pediatric COVID-19 deaths or ICU admissions were available, COVID-19 deaths/1,000,000 children was calculated as 1.32 and 2.77 in HIC and LMIC ( p < 0.001) respectively , while ICU admission/1,000,000 children were 18.80 and 1.48 in HIC and LMIC ( p < 0.001), although the ICU admission data from lower MIC and LIC were scarce . Overall, 3,379,049 children with a known outcome for fatality and 1,738,306 children with a known outcome for ICU admission were included in our calculation of CFR and ICU admission rate as a secondary outcome analysis. The estimated pediatric CFR was 0.061% (95% CI [0.059–0.064%]) (2,061/3,379,049) and pediatric ICU admission rate was 0.152% [0.146–0.158%] (2,644/1,738,306). The world map of national pediatric CFR is presented in . The pediatric CFRs in HICs, upper MICs, lower MICs and LIC were 0.012% [0.010–0.013%], 0.150% [0.140–0.162%], 0.433% [0.407–0.461%] and 0.241% [0.230–0.253%], respectively . The pediatric CFRs are significantly higher in LMIC than HIC (CFR 0.29% [0.28–0.31%] in LMIC vs 0.03% [0.03–0.03%] in HIC; p < 0.001). ICU admission rates were 0.128% [0.122–0.133%] in HIC and 0.397% [0.367–0.430%] in upper MIC; p < 0.001). Age-specific deaths and ICU admissions/1,000,000 children were 10.03 [9.07–11.07] and 16.84 [15.15–18.67] in < 1 year old, 1.64 [1.44–1.86] and 1.40 [1.16–1.67] in 1–4 years old, 0.92 [0.79–1.06] and 0.73 [0.58–0.90] in 5–9 years old, 1.13 [0.99–1.30] and 0.97 [0.79–1.12] in 10–14 years old and 2.70 [2.47–2.95] and 2.78 [2.47–3.12] in 15–19 years old, respectively ( and ). 
Disaggregated age data were available in 558,961 and 242,827 children with confirmed SARS-CoV-2 infection as denominators for calculation of age-specific CFR and ICU admission rate, respectively. Age-specific CFR and ICU admission rate are shown in and . Infants < 1 year old had the highest CFR (0.58% [0.50–0.67%]) and ICU admission rate (1.41% [1.27–1.56%]). Of 1,924 pediatric COVID-19 ICU admissions with data regarding required maximal respiratory support, 578 (30.0%) required invasive mechanical ventilation . Twenty-nine neonatal COVID-19 fatalities and 129 neonatal ICU admissions were identified. Neonatal case fatality and ICU admission rates were estimated as 1.74% (15/864) and 10.81% (91/842), respectively. The results of the one-way sensitivity analysis are presented in . In all sensitivity analysis scenarios, differences in deaths/1,000,000 children and CFR between HIC and LMIC were maintained (deaths/1,000,000 children in HICs = 1.26–1.32 and in LMICs = 2.43–2.77; p < 0.001 and CFR in HICs = 0.01–0.01% and CFR in LMICs = 0.23–0.24%; p < 0.001).
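As a quick arithmetic check (illustrative only), the headline pooled rates follow directly from the numerators and denominators stated in the Results; the published confidence intervals were likely computed with an exact or software-specific method, so they may differ marginally from a simple approximation.

# Sanity check of the headline pooled rates using the counts reported above
deaths, cases_with_fatality_outcome = 2_061, 3_379_049
icu_admissions, cases_with_icu_outcome = 2_644, 1_738_306

print(f"pooled CFR:           {deaths / cases_with_fatality_outcome:.3%}")   # ~0.061%
print(f"pooled ICU admission: {icu_admissions / cases_with_icu_outcome:.3%}")  # ~0.152%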
To our knowledge, this study is the largest and most comprehensive systematic review of severe pediatric COVID-19 outcomes to date. The fact that the majority of pediatric COVID-19 fatalities were reported from LMIC, and COVID-19 death using age-specific population as the denominator is greater in LMIC, indicates that the impact of pediatric COVID-19 fatality may in fact be larger in LMIC than that of HIC. The larger impact of pediatric COVID-19 fatalities in LMICs may be in part a consequence of a lower capacity or quality of healthcare system overall. The findings of larger impact of pediatric COVID-19 fatalities for LMICs is also consistent with the higher total all cause child death numbers in these countries . Although many child deaths in LMIC occur without medical attention , whether undercounting deaths in LMIC is also an issue for pediatric COVID-19 deaths is unknown. Our results have shown the opposite to be true for global pediatric ICU admissions. This is in part reflective of better case identification of severe cases of COVID-19 in HIC and dramatically better access to pediatric ICU level care, instead of the difference of disease severity in children . We also showed that the CFR was significantly higher in LMICs compared to HICs (0.24% and 0.01%; p < 0.001), a finding not previously reported, and one that may in part be related to less testing of children with less severe disease in LMICs. Of note, LICs report the lowest COVID-19 deaths/1,000,000. Given the fact that LICs still have higher CFR compared to HICs and upper MICs, this may reflect differing epidemiology of the outbreak situation of COVID-19 in LICs, rather than children in LICs being less likely to have severe COVID-19 once infected. National CFRs can be influenced by many factors, including the age-distribution of the population, phase of the epidemic, and access to and capacity of the health care system . Given that children are more likely to be asymptomatic or paucisymptomatic than adults , calculations of rates using number of infected children as a denominator are likely to be overestimates. For example, nationwide seroprevalence of SARS-CoV-2 in Spain ranged from 1.1–4.0% in children , which means the estimated number of children infected with SARS-CoV-2 from serological tests is much larger than the number of PCR-diagnosed cases in the nation. However, we believe that using serologically-diagnosed cases, instead of PCR diagnosed cases, as a denominator to calculate CFR or ICU admission rate is less appropriate because of variable access to this test across countries during the period under study as well as significant variability in type of serologic tests, and performance of these tests . There are several limitations in our study. First, although our study also showed that among children, those younger than 1 year of age have the highest CFR and ICU admission rate (0.55% and 1.52%), many countries did not report disaggregated age-specific outcome data for children. Nevertheless, the rates we have found are higher than those previously reported . Not all geographic regions equally reported national data in children. While many American and European countries reported nationwide data in children, the data from Africa and Middle East were limited. The scarcity of national ICU admission data from lower MICs and LICs should also be noted. The large heterogeneity of included studies and national reports is another potential limitation of our study. 
Of note, because we specifically focused on fatality and ICU admission rates from acute COVID-19, data regarding other important COVID-19-related outcomes, including MIS-C, as well as indirect impacts of COVID-19 on child health such as disruption to routine childhood vaccination rates and the socio-developmental effects of school closures, were not evaluated in our study. While this study excluded known MIS-C cases, it is possible that some countries' data may include these cases. Finally, our study could not assess the effect of underlying medical conditions, including malnutrition, on severe COVID-19 outcomes. Although our search found a few case reports from LMIC about fatal COVID-19 cases in children with malnutrition , none of the studies extracted in our systematic review investigated the impact of malnutrition on severe COVID-19 outcomes at a population level.
This systematic review is the first to document the global magnitude of impact, specifically fatality and ICU admission, for pediatric COVID-19. The results suggest that there may be a larger impact of pediatric COVID-19 fatality in LMICs compared to HICs. Given the lack of uniform testing strategies among children across the globe, temporal and geographic comparisons among children should be carefully undertaken in order to discern the possible extent of undiagnosed SARS-CoV-2-related deaths. While the true CFR from COVID-19 in children is likely to be lower than the numbers presented here due to limitations in counting total number of infections in children, these data can be taken as a starting point for comparing LMIC to HIC. Efforts should be made both within countries and by the global community to rigorously document age-specific outcomes in a timely manner as such data are important both to fully understand the global epidemiology and also to demonstrate where mitigation efforts, vaccine distribution and improving access to care are warranted.
S1 Checklist. PRISMA 2009 checklist. (DOC)
S1 File. Supplementary methods. (DOCX)
S1 Fig. World map of national pediatric case fatality rates. Abbreviations: CFR, case fatality rate. CFRs are presented as percentages. The "no pediatric case reported" category includes countries whose national reports clearly state that no case was confirmed in children as of December 7, 2020. National reports published more than 2 months before December 7 are not included if the country has been at CDC COVID-19 Level 2–4 since the date of the report. (TIF)
S2 Fig. Pediatric case fatality rate by country income. Ranges are presented as 95% confidence intervals of each proportion. Global includes all countries defined by World income. Abbreviations: HICs, high-income countries; LMICs, low- and middle-income countries; MICs, middle-income countries; LICs, low-income countries. (TIF)
S3 Fig. Age-specific case fatality rate and ICU admission rate. A. Age-specific case fatality rate. B. Age-specific ICU admission rate. Age-specific national data with up to one year difference in age buckets were included; for example, age-specific national data reporting outcomes for 1–5 years and 10–15 years were included in our calculations for 1–4 years and 10–14 years. Abbreviation: y, years. (ZIP)
S1 Table. Study design and risk assessment for included articles from database search. (DOCX)
S2 Table. National fatality and ICU admission data in children aged 0–19 years with confirmed SARS-CoV-2 infection (Dec 7–11, 2020). (DOCX)
S3 Table. Study design and risk assessment for included articles from database search. (DOCX)
S4 Table. Age-specific fatality and ICU admission (per 1,000,000 children), case fatality rate and ICU admission rate. (DOCX)
S5 Table. ICU cases in children aged 0–19 years with confirmed SARS-CoV-2 infection who have a known respiratory or circulatory support outcome. (DOCX)
S6 Table. Sensitivity analysis using alternative data source. (DOCX)
|
Primary Melanoma of the Urinary Bladder: Clinical, Histopathologic, and Comprehensive Molecular Analysis of a Rare Tumor | c918dbe4-63af-4fcc-aa35-009fa922cf92 | 11915768 | Anatomy[mh] | Mucosal melanomas are rare tumors, consisting of about 1% of all melanomas. They arise from melanocytes that reside in mucosal tissues and are mostly seen in oral, nasal, genital, and rectal mucosae. In contrast to cutaneous melanoma, ultraviolet radiation is not involved in its pathogenesis. The molecular landscape of mucosal melanoma is characterized by a low point mutation burden, with high numbers of copy number and structural variants and mutations in mitogen-activated protein kinase (MAPK) pathway genes such as NRAS, BRAF, NF1, and KIT , as well as SF3B1 , TP53 , SPRED1, ATRX, HLA-A, CDH8, and CTNNB1 . – Overall, mucosal melanomas have a significantly worse outcome compared to cutaneous melanomas with 5-year survival rates ranging from 20% to 25%. , The genitourinary tract is a rare location for mucosal melanomas, the urethra being the most common site. There are <50 examples of primary urinary bladder melanomas reported in the literature. – Tumors usually present with macroscopic hematuria and dysuria. Cystoscopy is the primary modality for recognition and usually shows a dark-pigmented mass with varying dimensions. Primary bladder melanomas can have variable histomorphology and can be misdiagnosed as urothelial carcinoma, sarcoma, or metastatic cutaneous melanoma. Melanocytic markers such as SOX10, Melan-A, and HMB-45 are usually positive. Molecular alterations specific to primary urinary bladder melanomas are rarely reported in the literature and are mainly composed of a few case reports and small case series with limited molecular analysis. , More comprehensive genomic studies are needed to understand the pathogenesis of primary urinary bladder melanomas and identify possible targeted treatment modalities. Here we report a primary bladder melanoma, describing clinical, morphological, and immunohistochemical findings and molecular analysis.
A 70-year-old woman presented with complaints of gross hematuria and was evaluated with computed tomography (CT) urogram, which revealed a focal linear filling defect in the proximal right ureter with a heterogeneous mass in the inferior border. Positron emission tomography/CT showed multifocal hyperenhancing lesions in the bladder, the largest at the anterior bladder neck and orifice with likely extension into the urethra (measuring 1.9 × 1.5 cm) and an additional lesion near the right posterior bladder base (1.4 cm in maximum diameter) ( ). No definite evidence of hypermetabolic metastatic disease outside of the bladder was seen. Subsequent dermatological examination revealed no suspicious skin lesions or sites of a non-urologic primary. The patient underwent transurethral resection followed by radical cystectomy with urethrectomy and pelvic lymph node dissections.
Macroscopic Findings
The cystectomy specimen showed multiple foci of brown–gray, friable, nodular masses (approximately 8 × 7 cm in aggregate area) throughout the trigone, dome, anterior, right lateral, and posterior wall with scattered small foci in the left lateral and posterior wall; overall covering ∼75% of the bladder mucosa ( ). The maximum depth of the masses measured 0.6 cm, with possible invasion into the muscularis propria. The lesions extended to the urethral margin and were within 0.1 cm of the anterior perivesical soft tissue margin. A separately submitted urethral meatus resection specimen was firm with no distinct masses.
Microscopic Findings
Hematoxylin and eosin (H&E) stained sections of the mass showed sheets of cells forming intramural nodules ( ), exophytic masses protruding toward the lumen ( ) as well as flat lesions demonstrating pagetoid spread of neoplastic cells throughout the urothelium ( and ). The neoplastic cells were slightly dyscohesive, spindled to epithelioid, with moderate amounts of amphophilic cytoplasm, irregular and hyperchromatic nuclei with abundant mitotic figures, apoptotic debris, and an inflammatory cell infiltrate ( and ). Dark brown pigment was occasionally present in neoplastic cells and intermixed macrophages ( , , and ). Immunohistochemical studies showed diffusely positive SOX10 ( ), Melan-A ( ), HMB-45 ( ), preferentially expressed antigen in melanoma (PRAME) ( ) and patchy S100 ( ) immunoreactivity and were negative for p63 ( ) and GATA3, supporting the diagnosis of melanoma. A CD163 stain highlighted admixed pigmented macrophages (melanophages) ( ). The tumor invaded into the lamina propria but no definite muscularis propria invasion was identified. The separately submitted urethral meatus resection specimen showed a small subepithelial nodule of melanoma and melanoma in situ extending to the mucosal resection margin. No tumor was identified in the submitted lymph nodes.
Molecular Studies
Hybrid-capture-based next-generation sequencing was performed at the University of California San Francisco Clinical Cancer Genomics Laboratory, using an assay targeting the coding regions of 479 cancer genes, as well as select introns of 47 genes frequently involved in rearrangements (UCSF500 Cancer Gene Panel). Analysis showed numerous copy number changes and multiple focused amplifications including a high-level amplification of the KIT locus with PDGFRA and KDR coamplifications and amplifications of EP300, SOX10, and CRKL genes. Single nucleotide variant analysis showed a frameshift mutation in NF1 . No ultraviolet signature was identified.
Clinical Outcome
On follow-up, adjuvant immunotherapy was recommended although ultimately not pursued. The patient was followed up elsewhere with systemic therapy status unknown to us. She later developed widely metastatic disease and died within 43 months of initial diagnosis.
Primary mucosal melanoma of the urinary bladder is an uncommon malignancy often presenting with hematuria and cystoscopy findings of a pigmented lesion/mass which are not entirely specific and could be seen in a variety of benign or malignant conditions. At the benign end of the differential diagnosis spectrum is the melanosis of the urinary bladder, which is another rare entity of unknown pathogenesis. Although melanin pigment is present, melanosis is not thought to be composed of melanocytes. The morphologic findings of pigmented bland urothelium with no invasive/mass-like lesion along with negative melanocytic markers are helpful clues to differentiate this entity from melanomas. , Like melanomas at other anatomical locations, primary melanomas of the urinary bladder can display varying morphology including epithelioid, clear, spindled, and rhabdoid cells with nested, diffuse, and fascicular growth patterns. , Histological diagnosis can be challenging especially when melanin pigment is absent. The variable morphology brings up the differential of high-grade or poorly differentiated urothelial carcinoma, sarcomas, and metastatic cutaneous melanoma, all of which occur more frequently in the urinary bladder than primary mucosal melanoma. Immunohistochemistry can assist in the diagnosis, particularly of amelanotic tumors. Cutaneous melanoma metastatic to the bladder occurs in 18% of patients who die from metastatic melanoma; therefore, careful physical examination is required to assess this possibility. The following criteria were suggested for considering melanoma a primary lesion in urinary bladder: (1) no history of a previous cutaneous lesion; (2) no evidence of regressed cutaneous malignant melanoma; (3) no evidence of other visceral primary melanoma; (4) the pattern of recurrence should be consistent with the findings in the region of initial malignant melanoma; and (5) margins of bladder lesion should contain atypical melanocytes similar to those seen in the periphery of primary mucous membrane lesions. , BRAF V600E mutations are uncommon in mucosal melanoma and immunohistochemistry for the BRAF V600E, although not entirely specific, can be a helpful tool for distinguishing primary mucosal melanoma from metastases originating from cutaneous melanoma. Assessment of these criteria relies significantly on a complete and accurate clinical history with a thorough radiological and physical examination. It is possible that a portion of the primary bladder melanoma tumors presented in the literature, especially those with widespread metastasis, could represent metastases from undetected or regressed cutaneous melanomas. In the current patient, dermatological examination showed no foci of cutaneous melanocytic lesions and there were no additional foci detected by radiological imaging. While a regressed cutaneous melanoma cannot be entirely excluded, the presence of an in situ portion at the margin of the tumor was in keeping with primary mucosal melanoma. We cannot be entirely sure whether the tumor originated from the urinary bladder or the proximal urethra as it involved both locations at presentation, although the majority of the mass was centered in the urinary bladder favoring the urinary bladder as the most likely origin. Molecular studies are helpful in differentiating metastatic versus primary mucosal melanomas. 
In contrast to the vast majority of cutaneous melanomas, mucosal melanomas do not contain characteristic ultraviolet-induced C > T nucleotide transition signatures and have a low mutational burden and a greater number of structural chromosomal variants. Whole exome sequencing studies in a cohort of various mucosal melanoma subtypes revealed frequent mutations in MAPK pathway genes such as NRAS, BRAF, NF1, and KIT as well as SF3B1, TP53, SPRED1, ATRX, HLA-A, CDH8, and CTNNB1 genes. , The mutational profile may differ among mucosal melanomas depending on their anatomical location; for example, mucosal melanomas of the urethra show a higher frequency of TP53 mutations compared to vulvar/vaginal melanomas. There are rare case reports and a small case series of primary melanomas of the urinary bladder with limited molecular analysis, revealing alterations in BRAF, FGFR1, and ERBB2 genes. , BRAF V600E mutations are rare in mucosal melanomas, although they were reported in 3 out of 5 patients in a case series of primary melanomas of the urinary bladder , , – ( ). Without additional evidence, this raises the possibility that these may represent metastases rather than primary mucosal melanoma. Mucosal melanomas are genetically characterized by highly rearranged genomes with numerous copy number changes, including multiple focused amplifications, and a low mutation burden, as seen in our patient. , , Somatic mutations in our patient included a frameshift mutation in NF1, as is common in mucosal melanoma. In addition, there were amplifications of the loci encoding EP300, SOX10, and CRKL, all genes recurrently amplified in mucosal melanoma. NF1 inactivating mutations are common in mucosal melanomas and frequently co-occur with KIT activation. , In this tumor, there is a focused high-level amplification of the KIT locus. While PDGFRA and KDR are co-amplified, KIT is considered the driver oncogene at this locus as it is recurrently mutated in mucosal melanomas. The frequent co-occurrence of KIT activating mutations or amplification with NF1 or SPRED1 inactivation has led to the suggestion of using combination inhibitors of KIT and MEK for tumors driven by these alterations. , Mucosal melanomas are generally detected at advanced stages and have a poor prognosis. Unlike cutaneous melanomas, mucosal melanomas do not respond well to immunotherapy, and to date, only limited actionable driver mutations have been identified. There is no established staging guideline and no standardized treatment for primary urinary bladder melanomas. Treatment options include transurethral resection or radical cystectomy followed by chemotherapy, radiotherapy, or immunotherapy. Molecular studies can help explore possible targeted treatment options such as KIT and MEK inhibitor therapy in this situation. More studies are needed to establish reliable staging and treatment guidelines for primary urinary bladder melanoma. With a further understanding of the genomic landscape of primary urinary bladder melanomas, actionable mutations for targeted treatment strategies could be identified.
Academic Ophthalmology during and after the COVID-19 Pandemic | 278243ca-f693-43d6-a9ef-e65605acc340 | 7194607 | Ophthalmology[mh] | Lectures have been converted rapidly from group meetings in conference rooms to online video conferences (e.g., Cisco WebEx, Zoom). This practice is convenient for faculty and residents, who may be dispersed in satellite clinics or segregated teams, allowing lectures to start at more convenient times (e.g., 7:00 am rather than 6:30 am because travel is eliminated). The long-term impact is that online video lectures are likely to continue after the pandemic. Grand rounds are conducted via online platforms because of restricted movements, that is, complete or partial lockdowns. This approach also is easily adopted and increases the availability of invited national or international speakers (no need for travel), with reduced costs. A disadvantage is that online grand rounds reduce the opportunity for residents to network with senior ophthalmologists, some of whom may become fellowship preceptors. The long-term impact is that online grand rounds are likely to continue, but only as a component of the curriculum, because physical interaction and networking remain important. International conferences (e.g., the Association for Research in Vision and Ophthalmology, World Ophthalmology Congress) also have moved online. This approach allows dissemination of scientific findings by international speakers without the need for travel. The long-term impact is that online large conferences may not continue, because conferences play key roles for interaction and networking. However, a virtual option likely will be a regular feature of all major international conferences.
Outpatient volume has decreased (some >75%) and is restricted to urgent care. Personal protective equipment is required in clinics to prevent possible infection of patients and staff. A consequence of the decreased volume is that many patients are losing sight irreversibly. Before COVID-19, many AMCs had already developed virtual clinics and telemedicine programs. , Patient acceptance has now increased, as has insurer acceptance of coding and billing for telemedicine services, which may hasten the adoption of such programs. For patients with mobile phones or computers, some AMCs can conduct face-to-face interviews. Others have adopted new technologies for visual acuity and visual field tests via telemedicine. Home monitoring equipment such as OCT may be possible for patients who can afford it. To reassure patients who fear contagion at the AMC and to re-establish follow-up, some AMCs are scheduling appointments in safe environments outside the hospital. The long-term impact is that digital and telehealth initiatives are likely to be sustained, because virtual clinics and telemedicine will be established clinical practice for screening and monitoring of stable patients. This approach may address some important problems in patient management, such as the failure of patients to receive in routine practice the degree of clinical care (e.g., rigorous follow-up for management of age-related macular degeneration) that they are required to receive in registration clinical trials. Surgical volume has decreased (some >75%) during the pandemic and is also restricted to urgent or emergent conditions. However, because many AMCs are level 1 trauma units, complex cases (e.g., orbital cellulitis with abscess, intraocular foreign bodies) continue to receive surgery. In many AMCs, ambulatory surgery centers are closed, and ophthalmology cases are performed in the main operating rooms. The need for appropriate equipment for eye surgery in the main operating rooms thus has been demonstrated. Academic medical centers also have increased the rigor of protocols to ensure the health and safety of operating room personnel before, during, and after surgery, particularly during anesthesia induction for general anesthesia cases (with aerosol generation). The long-term impact is that routine elective surgeries will increase after the pandemic. Heightened safety standards are likely to persist indefinitely.
Because of changes in outpatient practice, residents will become experts in telemedicine and remote monitoring. This will accelerate incorporation of artificial intelligence into clinical practice. The pandemic also has had significant impact on surgical training for residents (e.g., 4–6 months of reduced surgical volume). Given the travel restrictions, no alternative surgical sites exist for residents. Training may need to rely more heavily on virtual reality surgical simulators. During this pandemic, medical students likely will not have exposure to ophthalmology. The long-term impact on clinical standards resulting from a reduction in clinical and surgical exposure on this cohort of graduating ophthalmology trainees is unclear. A cohort of graduating general physicians will have little to no ophthalmology exposure. Academic medical centers will need to modify residency curriculum to include data science, informatics, virtual reality training, and telemedicine.
Only essential clinical research has been permitted in most AMCs (e.g., clinical trials for sight-threatening conditions). A decrease in patient follow-up and treatment because of fear of acquiring COVID-19 in the clinic may impact clinical trial outcomes. Telemedicine has been used to contact patients, but remote diagnostics (e.g., vision, OCT), although useful for routine clinical care, may not be suitable for clinical trials. Regulatory agencies (e.g., the Food and Drug Administration) may have to accept novel data acquisition strategies. Some AMCs are establishing mobile vans for clinical trial patients for assessment and treatment. The long-term impact is that clinical research protocols likely will need to adapt to account for future pandemics, including novel data acquisition, telemedicine, online questionnaires, and remote monitoring. Regulatory agencies may modify their requirements regarding data capture.
Similarly, only essential basic laboratory research activities are permitted in most AMCs. Experiments have stopped at reasonable pause points in their protocols. Automated support systems (e.g., power, temperature control) need to operate reliably, but a need exists to maintain core activities (e.g., cell culture, animal care). The long-term impact is that some changes to basic research protocols are likely.
All nonessential nonclinical personnel have started to work from home. Most academic work can be carried out remotely, including conducting meetings, telephone management, scheduling, grant administration, and budget preparation. New secure electronic means for working from home are being implemented. Nonessential meetings have been cancelled without impact on operations, suggesting that many nonessential meetings could be cancelled permanently. The long-term impact is that acceptance of working from home will increase. Many staff members have children and elderly family members at home, and the societal value of and impetus to maintain work-from-home programs may increase in the future. Over the past century, AMCs have evolved and adapted to changes to meet their triple mission of clinical care, teaching, and research. Many changes in practices during this pandemic will be accelerated and sustained and will become part of the new normal after the COVID-19 pandemic.
Exploring trauma-informed prenatal care preferences through diverse pregnant voices | e497a087-dde6-433e-aa36-051ff3c4318f | 11951521 | Community Health Services[mh] | Prenatal care presents an opportunity for promoting positive physical and mental health outcomes during pregnancy, offering a critical and unique partnership between patients and providers. Some barriers to prenatal care can be attributed to a patient’s past history or current exposure to trauma, such as intimate partner violence , adverse childhood experiences (ACEs), and sexual assault . Furthermore, the healthcare system can be a source of trauma or re-traumatization, particularly in a prenatal care setting where sensitive exams are routinely performed . Trauma is associated with negative reproductive health outcomes, including pregnancy complications, such as gestational hypertension and diabetes, perinatal mental health issues (e.g., anxiety, depression, and posttraumatic stress disorder), as well as preterm birth and poor neonatal outcomes [ – ]. Trauma, including intergenerational trauma, is experienced at higher rates by underrepresented patient populations, exacerbating disparities in prenatal care [ – ]. Previous research showed that the life-time exposure to trauma and PTSD is highest among Black (8.7%) and moderate among Hispanic people (7%) while 70% of trauma symptoms were associated with racial discrimination . Trauma-informed care (TIC) is a strengths-based approach to care that actively seeks to address trauma in a healthcare setting by emphasizing safety, trustworthiness and transparency, peer support, collaboration and mutuality, empowerment, voice, and choice, as well as cultural, historical, and gender issues . TIC is a way for providers to realize the impact of trauma on health, recognize patient signs and symptoms of trauma, respond to trauma with resources and treatment plans, and resist re-traumatization . In prenatal care settings, TIC can promote equitable reproductive health outcomes. For example, TIC may improve patient trust in providers; patient adherence to evidence-based prenatal care recommendations; compassionate provider response to patient disclosure of trauma; and connection to critical, interdisciplinary resources to promote resilience, identify strengths, and disrupt intergenerational trauma [ – ]. While there exists strong evidence that TIC can promote widespread, generational reproductive health outcomes, there are no standards of care for integrating TIC into prenatal care in a patient-centered manner. This gap represents a missed opportunity to optimize and enhance patient-centered, prenatal care outcomes and promote reproductive health equity, especially among pregnant people from underrepresented communities. This study seeks to fill this gap by exploring the preferences of Black and Hispanic pregnant people, with the aim of informing the development of trauma-informed prenatal care practices.
Qualitative methods were used to explore patient preferences regarding trauma-informed prenatal care. These data were adjunctively collected from participants who were concurrently enrolled in a randomized controlled trial (RCT) testing a health promotion and wellness skills intervention for reducing stress in pregnancy. RCT participants were randomized (1:1) to either a trauma-informed prenatal intervention group focusing on behavioral change and regulation skills or a control group who received prenatal education. The two groups received four weekly, individually-delivered sessions with assessments of psychological and socioemotional functioning at baseline, post-intervention, 4 weeks prenatal post-intervention, and 6 weeks postpartum. The primary outcomes paper of quantitative self-report measures is featured in a separate publication . This study was approved by the University of Illinois Chicago Institutional Review Board (2022-1175). The qualitative results were reported in accordance with the Consolidated Criteria for Reporting Qualitative Research (COREQ) guidelines .
A purposive sampling strategy was used to recruit 40 pregnant individuals aged 18 and older between 10 and 24 weeks gestation from a university-affiliated federally qualified health center and multi-specialty clinic. Initial chart screening of medical records was used to confirm pregnancy status. Participants were screened via phone for complete eligibility requirements, and exclusion criteria consisted of the inability to reliably or safely participate in the study due to self-reported serious or persistent mental health disorder.
Of the 40 enrolled RCT participants, 27 participants completed either the intervention or prenatal education program. Thereafter, structured, individual interviews (lasting 15–20 min) were administered within one week of completing the program at the post-intervention assessment timepoint. Responses were simultaneously transcribed by the interviewer as the participant answered each question. Qualitative data collection extended from June 30, 2023 through April 11, 2024. The interview questions asked about participants’ preferences regarding prenatal care, prenatal care providers, trauma inquiry and response, and resources (Table ).
Inductive thematic analysis was employed to generate codes and identify emergent themes derived from the participants' responses. Researchers analyzed the data following Braun and Clarke's six-stage process for identifying patterns and themes within the qualitative data. The researchers read the transcripts multiple times to immerse themselves in the data. Thereafter, two researchers independently sorted the data by dissecting significant statements into meaning units that were entered into Microsoft Excel. Subsequently, the data were manually and inductively coded according to the codebook (Supplemental Table ). Analyst triangulation was used to strengthen and corroborate the research findings: a third researcher (MS) cross-checked the analyses performed by the other team members to mitigate potential biases . The three researchers reconciled differences during the coding phase and achieved consensus in formulating the themes and representative excerpts for the codes. Supplemental Table depicts the codebook, providing succinct descriptions that define the breadth of codes assigned to the meaning units. Subsequently, deductive reasoning was used to map the emergent themes to one or more of the Centers for Disease Control and Prevention (CDC)'s six principles of trauma-informed care for further analysis, interpretation, and application .
Analyst triangulation was used to contest the pre-existing assumptions, biases, and knowledge of the research team regarding trauma-informed care and avoid over-interpretation of the participants’ responses . The analysis was supported by exhaustive descriptions and quotations from the data to provide the participants with a voice and to ensure that the analysis was grounded in the data. In addition, the participant characteristics were clearly delineated, thus enhancing the possibility of transferring the findings to a similar patient population. This study captured the perspectives of the participants who expressed their concerns and preferences regarding their prenatal care, which is in accordance with the qualitative analysis principle of authenticity, and enables advocating for the patients’ needs through dissemination of the results, furthering research development and policy actions .
Participants (N = 27) had a mean age of 28 years (SD = 4.5; range = 19–38). The majority of participants self-identified as either Black (77.8%) or Hispanic (18.5%) and had some college education (55.6%). Slightly more than half of participants reported having had a previous live birth (51.9%), being single (55.6%), and having received mental health services/counseling (55.6%). On average, participants reported an income-to-need ratio of one point above poverty level and nearly two basic needs, including housing, food, transportation, utilities, and/or personal safety. Participants also reported having experienced discrimination less than once a year because of their race, ethnicity, or skin color. Furthermore, participants reported a mean of 6 (SD = 4.3) adverse childhood experiences and 9 (SD = 1.2) benevolent childhood experiences (Table ).
Table shows the domains, themes, codes, and exemplar excerpts from study participants.
Seven themes emerged regarding participant preferences for prenatal care and care providers: (1) Agency and Choice, (2) Emphasis on Maternal and Child Health and Wellbeing, (3) Universal and Personalized Provision of Information and Resources, (4) Familiar and Experienced, (5) Personally Engaging, (6) Emotionally Safe and Supportive, and (7) Concordant Care.
Twenty participants emphasized the importance of their voices being heard as integral partners in the therapeutic relationship and decision-making process, rather than being told what to do by “an autocratic healthcare provider” as one participant noted (P18). Participants valued having options regarding their care team, treatments, care procedures, delivery methods, and resources/information. Participants specifically indicated that they wanted these options to be genuine. For example, one participant noted that if options were presented, it had to come with an agency of choice: “When the provider asks whether the student can enter the room and observe and then perform the check, there is really no choice because I do not feel comfortable saying no. I would prefer they do not. Sometimes asking for permission is not actually providing a choice.” (P1) Participants indicated they did not feel comfortable refusing certain aspects of care. Furthermore, they highlighted the need for flexibility in scheduling appointments.
Fifteen participants wanted their providers to demonstrate interest in their wellbeing and six participants wanted them to check on the baby as well. For instance, some participants commented their midwives asked them about health behaviors as well as tracking blood pressure and checking for preeclampsia. Additionally, participants wanted to be asked about their home life, sleeping habits, and any questions or concerns they may have. Three participants also expressed a desire for their providers to assess for psychological health needs, such as coping with stress. Furthermore, participants reported that they did not want to feel rushed or pressured in the care interaction. Rather, they wanted adequate time to voice concerns and to make informed decisions. While participants highlighted their need to feel prioritized in prenatal visits, they also indicated that they appreciated aspects of routine care, such as listening to the baby’s heartbeat and movements, and knowing their baby was healthy. For instance, one participant indicated that her favorite part of the visit was listening to the heartbeat of the baby, assuring her that everything was going well with her pregnancy.
Eight participants valued familiarity and continuity of care. Participants preferred to “…stick with the person I am comfortable, familiar, and understands me…” (P8) and “a provider that checks in with me and notices when something is off.” (P25) Additionally, participants indicated that having the same provider could help in detecting any emerging health issues and tracking existing ones, since the provider was already familiar with their health and pregnancy history. Four participants also valued training and experience in their prenatal providers. For instance, participants explicitly mentioned “…number of years of experience in training…” as desirable qualities for their prenatal providers.
Participants wanted their prenatal providers to be personally engaging and approachable. For example, participants said that they want their providers to be “happy people who want to be at work” and not have “mean people in the space” (P1) nor be “pushy and rigid.” (P18) Furthermore, participants wanted their providers to motivate them to actively participate in their own care by encouraging questions and collaborating with them in planning care goals. Participants wanted their providers to convey genuine interest in their perspective and concerns by actively listening to what they had to say and not dismissing them for having too many questions.
Participants shared that they want their providers to make them feel emotionally safe and supported. They wanted their provider to be “…validating of their experience and reassuring that they are not crazy for thinking and experiencing it that way…however it manifested in their life is okay…their experience is their experience, and it is not wrong in how they feel” (P14). Furthermore, participants wanted providers to be “offering good support and asking questions about if they needed support” (P21). Eleven participants stressed the need for prenatal providers to be non-judgmental: “I don’t want the experience of being hounded with questions and the experience of feeling like I have to walk on eggshells around my providers…”. Another participant said that she felt ashamed when her provider said that “she should be used to this by now” (P1), referring to conducting a cervical exam. Participants also highlighted the importance of providers being open-minded and respectful: “…respecting my body and the things that are happening with my body…respecting how I feel about certain things and my body…” (P14). Overall, the participants wanted to feel that their prenatal providers were supportive through their journey of pregnancy and childbirth.
One participant shared that she would prefer to have a prenatal provider who is racially concordant with her; as a Black woman, she felt it was important to have a Black midwife. Two participants preferred a provider who had children or who was an older woman.
All participants expressed a need for information and resources. Participants desired providers to present them with all of the available resources: "People should not have to ask for resources. Providers need to be more forthcoming and verbal about the resources available" (P1). Provision of targeted information was important for those who were pregnant for the first time as well as those with previous childbirth experiences. For instance, participants who were first-time parents indicated that they did not know where to look for information or what kinds of questions to ask. On the other hand, one of the participants with a prior pregnancy experience commented on how she wanted to learn more about breastfeeding, considering that her previous experience was painful. Participants indicated that information made their pregnancy experience easier and empowered them to make good decisions for their children and themselves. In addition, participants wanted healthcare providers to provide detailed explanations of procedures, treatments, and next steps in their care plan, in order for them to feel more comfortable and safe. Five participants mentioned the need for nutritional support during their pregnancy and after childbirth, including access to dieticians and the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC). Nine also desired access to basic needs, such as hygiene kits and diapers. Some participants mentioned housing and financial support as well as transportation resources to help them attend their appointments.
Three themes emerged around participant preferences regarding trauma inquiry and disclosure: (1) Variable Value for Addressing Trauma, (2) Variable Approaches to Addressing Trauma, and (3) Sensitive and Empathetic Inquiry and Response.
Variable value for addressing trauma in prenatal care
Participants thought that it was important for their prenatal provider to ask them questions about childhood history and trauma because it relates to their mental health, demonstrates caring, and affects prenatal care. One participant noted, “The provider asks the basic questions, but more personal questions would demonstrate caring, such as what a birth doula might do to get to know you” (P1). Further, participants noted the impact of certain kinds of trauma on pregnancy: “It would help their providers to know what they experienced, especially if it were sexual trauma and abuse” (P9). Additionally, participants noted that discussing trauma “can remind us where we came from and how we don’t want to treat our kids. How can I change this? I think it is an important part of prenatal care to remind you where you came from and what your main goal is” (P19). Three participants did not want to be asked about trauma for a variety of reasons, such as not trusting institutions: “I am very guarded, and I do not trust institutions, so I am not sure if I would want them to touch on or bring the question up, and even if they asked, I am not sure that I would share. That is due to me knowing that institutions are often obligated to report a lot, and I just want to protect my autonomy as much as possible” (P6). Participants who did not want to discuss trauma wanted to protect their anonymity, thought that their provider did not need to know their childhood history, or felt they could not assess the genuineness of the person asking about trauma well enough to feel comfortable answering.
Variable approaches to addressing trauma in prenatal care
When asked how they would like their providers to explore important life events, participants shared varied preferences. Regarding the method of asking, twelve participants preferred a verbal approach accompanied by a personal conversation, while an equal number (twelve) preferred a written questionnaire that allowed them to be more comfortable, open, and honest. Regarding the scope of screening, eight participants felt it was important to ask everyone, regardless of history: “If you don’t ask everyone, you could be avoiding a big impact that happened to that person” (P19). However, a few participants preferred a case-finding approach, in which the provider notices what health problems patients have and, if these are numerous or unusual, inquires further about past experiences.
Sensitive and empathetic inquiry and response
In addition to the scope and method of asking about trauma, participants preferred providers to be sensitive and empathetic when asking about personal topics: “Providers should be sensitive and caring when speaking about childhood experiences. I don’t like abrasive attitudes or rough and quick responses from my provider. Sensitivity and a gentle approach are so important” (P26). Participants appreciated when providers connected asking about trauma to their prenatal care: “Preface it with, I am going to ask, and there are studies that show why it is important; afterwards, what did your experiences teach you, and do you want to do anything differently? Explain how it relates to them today” (P19). Regarding the nature of the conversation, participants said that they wanted the conversation to be “natural,” and they did not want their providers to “overdo it” (P19).
Further analysis was carried out to deductively match the emergent themes to the six principles of TIC (Table ). This was done to highlight practices that can aid in the fulfillment of the TIC principles and the eventual development of a TIC perinatal standard of care. The analysis showed that each desired aspect of care aligned with one or more TIC principles .
Agency, choice, control
In accordance with the TIC principles of empowerment, voice, and choice, our findings suggest that agency, choice, and control are interdependent . This finding is consistent with previous studies reporting that loss of power can sometimes be experienced as a violation by pregnant people with trauma histories, especially when their desire to be heard and to be in charge of their body and their care is dismissed or ignored [ , – ]. Previous qualitative studies showed that pregnant people with trauma histories wanted to have the agency to decide on certain aspects of care, such as the frequency of cervical examinations, gender of the prenatal provider, and who should be involved in their care and delivery . They wanted a physically and psychologically safe experience that was free of coercion [ , , – ]. Pregnancy and childbirth can be stressful experiences due to the physical and physiological changes, functional role changes (e.g., navigating new responsibilities), and, in some cases, mental health challenges [ – ]. Furthermore, the stress of the pregnancy experience can be exacerbated by a traumatic history . This study’s participants reported a mean score of 6 on the expanded ACEs questionnaire, indicating multiple past adversities that increase risk for poor pregnancy and birth outcomes . Pregnant people with a history of trauma may feel a sense of powerlessness over their body and other important aspects of their life during the perinatal period [ , , ]. In addition, most of the sample in our study identified as people from racial/ethnic backgrounds that have been historically subjected to structural racism and discrimination. Consequently, being in control may mean taking back a voice that has been silenced for generations.
Familiar and experienced
Concordant with previous literature, participants wanted to have a consistent prenatal provider with whom they could develop a sense of trust, safety, and comfort [ , , – , – ]. Familiarity of providers and continuity of care allow patients to avoid repeatedly sharing a traumatic history [ , , , , ]. Similarly, participants’ preferences aligned with prior evidence that pregnant people do not want to be exposed to different providers during cervical examinations . Participants emphasized that they would trust a prenatal provider who is highly trained and competent in caring for pregnant people [ , , ].
Emphasis on maternal wellbeing and mental health
Our results are comparable with previous studies demonstrating that pregnant people appreciated a genuine interest from providers in their physical and mental health in addition to the health of their baby. Previous research shows that pregnant people, especially those with trauma histories, want to be treated like a whole person [ , , , ], and they want their providers to ask about their self-care and provide psychoeducation to develop coping skills [ , , , ].
Engaging, emotionally safe, and supportive
Participants preferred a prenatal provider who is engaging and supportive and who helps them feel emotionally safe. These values are also at the intersection of collaboration and mutuality as well as safety in TIC . Our study found that pregnant people felt safer and more empowered when their provider validated their experience, did not judge them, respected their body, and used sensitive language while communicating with them [ , , , , ]. Previous literature has demonstrated the importance of having a collaborative and therapeutic relationship with prenatal providers [ , , , ], which can lead to better maternal mental health, prenatal health behaviors, and enhanced birth outcomes, such as reduced preterm birth and low birth weight .
Concordant care
Our results also highlighted the significance of considering various cultural, historical, and gender issues in prenatal care that align with patients’ preferences and values . For instance, gender concordance and reciprocal relatability could make a provider more relatable, personal, and trustworthy. This is supported by studies showing that midwives found their personal life experiences to be an aiding factor in assessing a pregnant individual’s mental wellbeing , and that pregnant people tended to trust providers with whom they shared a lived experience . Extensive research has been published documenting the desire for and the benefit of having racially concordant obstetricians, midwives, and doulas [ – ]. While this desire may stem from the mistrust produced by structural racism, racial trauma, and obstetric violence that people of color experience in the healthcare system , having a racially concordant provider does not imply a universal mistrust of racially discordant providers . However, for some pregnant people of color, especially those with trauma histories, having a racially concordant prenatal provider means having a trustworthy person who profoundly understands their needs and perspectives, competently advocates for them, and provides them with a sense of engagement, satisfaction, safety, and comfort [ , , – ].
Provision of information and resources
Our findings highlight the importance of providing pregnant people with information and resources to support them throughout the pregnancy and empower them to adequately care for themselves and their children. Additionally, explanations of procedures were found to promote feelings of comfort and safety [ , – , , , ]. In accordance with our results, the provision of information and resources should be universally communicated and customized according to the needs and preferences of pregnant individuals, reinforcing a non-reductionistic whole-person care model that is patient-centered . This approach requires providers to get to know and collaborate with the individual to furnish equitable access to resources and, subsequently, enhanced outcomes . In addition to basic needs and nutritional support, this study’s participants displayed a strong emphasis on the need for mental health support. Available evidence shows that the need for mental health services among pregnant people often exceeds reported access to treatment . Socioeconomic disadvantage acts as a barrier to access and is associated with poor maternal mental health and birth outcomes [ , – ]. This study’s sample of participants had an income-to-need ratio of 2 (i.e., one point above the poverty line). Thus, integrated, co-located behavioral health services are recommended to enhance this population’s access to mental health counseling and to improve maternal and child health outcomes [ – ].
Addressing trauma in prenatal care
Study participants perceived trauma screening as important for their health and mental wellbeing, since it helped providers tailor care to their needs [ , , , , ]. They also associated trauma screening with breaking the cycle of intergenerational trauma . The acceptance of trauma screening and inquiry in our sample might be explained by the fact that the participants had a mean score of 9 on the BCEs questionnaire, which may also be indicative of greater promotive factors and less trauma symptomatology. Findings from the literature have shown that those with higher ACE scores are more likely to want longer conversations, and those with less resilience preferred to be screened by a mental health counselor . However, a few participants reported that they did not want to be asked about past trauma, since it was not related to their current health concerns, nor did they trust the provider with such sensitive information. These findings were supported by multiple studies in which pregnant trauma survivors stressed the importance of having a trusting therapeutic relationship with the provider as a facilitator of disclosure [ , , , ]. Our findings resonate with the available body of evidence, which does not favor a single evidence-based strategy for trauma assessment. This is demonstrated by mixed preferences [ , – ] for either administering a trauma screening that involves the standardized use of a validated tool or using trauma inquiry, which involves patient-provider dialogue . This study reinforces the need for providers to first demonstrate the importance and universality of trauma assessment , and then provide pregnant people with the autonomy to choose how they want to address the topic, rather than standardizing the use of one method. In addition, incorporating resilience screening into the assessment , and being tactful in asking questions and responding empathetically and authentically to disclosures without judgment, can reduce feelings of stigma and promote help-seeking [ , , , , ].
Limitations
The findings of this study should be considered in light of some limitations. This was a convenience sample of pregnant participants who had opted into an ongoing randomized trial for reducing stress in pregnancy, thus potentially affecting participant perspectives. Participant responses were not audio or video recorded, and handwritten field notes were taken by the research staff; however, not recording may be an optimal, rapport- and trust-building approach for collecting qualitative data . Furthermore, since this study used structured questions, no follow-up or probing questions were used to elicit further detail, pointing to the need for further exploration of patient experiences in prenatal care. In addition, the number of participants who received the structured interview depended on how many completed the RCT intervention or prenatal education control sessions. Considering that this study focuses on the perceptions of pregnant people in a single metropolitan area, future studies involving pregnant individuals from diverse racial/ethnic and vulnerable populations (e.g., refugees, immigrants) and under-resourced settings (e.g., high-poverty neighborhoods in urban areas, rural communities) would be valuable for examining cultural differences in disclosing or discussing trauma, as well as differing perceptions regarding the importance of emotional support versus clinical care during the prenatal period.
Implications
This study demonstrates the need for a shift in organizational culture and clinical practices towards trauma-informed prenatal care, which provides pregnant people with more control and prioritizes their physical and emotional safety. In accordance with the 2017 Guidelines for Perinatal Care developed by the American Academy of Pediatrics (AAP) and the American College of Obstetricians and Gynecologists (ACOG), our preliminary findings echo the need to develop a comprehensive continuity-of-care model that ensures integrated service delivery assessing for medical and psychosocial risk . Subsequently, implementing trauma screening and inquiry that focuses on patients’ strengths and needs would allow appropriately matched care to be provided through early and ongoing assessments, particularly for higher-risk patients. Furthermore, our study highlights the need to elicit prenatal provider perspectives on trauma-informed prenatal care, identifying the barriers and facilitators of incorporating TIC into individual practice, as well as implementing trauma-informed systems . An important next step will be to design, develop, implement, and test TIC training programs for prenatal providers that focus on trauma knowledge and its impact on health, and on trauma-sensitive, therapeutic patient-provider relationships and communication. TIC programs can be designed based on perceived gaps in provider knowledge and practices, and in collaboration with patient advisors, who bring a valuable perspective to inform policies and practices according to patient preferences. The trainings should focus on the provision of information in addition to including simulated skills training to facilitate the transfer of TIC knowledge into practice . Moreover, clinics should focus on building community partnerships to enhance the provision of resources for pregnant people, especially those from underrepresented and under-resourced communities. Finally, feasibility, time, and workflow studies can be valuable in exploring and attempting to overcome barriers in implementing TIC .
Preliminary findings underscore the importance of addressing the psychological well-being of pregnant individuals by routinely incorporating TIC principles into all aspects of prenatal care. By integrating patient preferences into practice, obstetric providers can promote tailored support and empathetic engagement for pregnant people. In turn, TIC enables the delivery of high-quality, comprehensive, and effective care that can enhance outcomes for pregnant individuals and their children. This is especially important in delivering trauma-informed and culturally responsive care for pregnant people from underrepresented communities. Future studies should explore the perspectives of both patients and providers to illuminate best practices, perceived gaps in provider knowledge and practice, and strengths and opportunities to improve practice and organizational culture related to providing trauma-informed prenatal care.
Supplementary Material 1.
Genetic Etiology Influences the Low‐Frequency Components of Globus Pallidus Internus Electrophysiology in Dystonia | 5d73c533-15ee-43cc-9c2d-a66537a8d7b3 | 11891763 | Surgical Procedures, Operative[mh] | Introduction Dystonia is a movement disorder causing abnormal, often repetitive movements or postures due to muscle contractions, with genetic forms resulting from pathogenic mutations in causative genes leading to diverse clinical presentations . Deep brain stimulation (DBS) targeting the globus pallidus internus (GPi) is an effective therapy for the drug‐resistant form of dystonia . Intraoperative microelectrode recordings (MERs) are routinely used to confirm the DBS lead placements, also reveal the pathophysiology of movement disorders . Elevated low‐frequency (4–12 Hz) activity in local field potentials (LFPs) of the GPi and subthalamic nucleus (STN) has been consistently associated with dystonia. This activity could potentially serve as a biomarker for closed‐loop DBS in dystonia . Therefore, examining the impact of genetic etiology on low‐frequency GPi electrophysiology could enable personalized adaptive DBS (aDBS) treatments. Herein, we compared the properties of low‐frequency GPi activity extracted from MERs collected during GPi‐DBS surgery from patients with various genetic and idiopathic forms of dystonia. A significant reduction in low‐frequency activity (4–12 Hz) was observed in patients with DYT‐ SGCE compared to DYT‐ THAP1 and idiopathic dystonia, highlighting the potential for personalized aDBS based on genetic factors in dystonia patients. Finally, our findings indicate that genetic etiology has no significant impact on the spatial characteristics of GPi electrophysiology. Therefore, MER‐based DBS lead placement can be performed independently of the genetic etiology of dystonia.
Materials and Methods
The study was conducted with genetic and idiopathic dystonia patients who underwent bilateral GPi‐DBS surgery under propofol and remifentanil anesthesia (Table ) at Fondazione IRCCS Istituto Neurologico Carlo Besta. Before 2000, only TOR1A was tested in patients. Between 2000 and 2015, individuals who were negative for TOR1A were retested via Sanger sequencing as new genes were identified. Since 2015, an NGS‐customized dystonia gene panel has been employed (Table ). The surgeries were conducted under stereotactic conditions with the Leksell (Elekta Inc.) or Maranello (Eidos22) frames. A thorough description of our standard surgical procedure is available elsewhere . Identification of the nuclei borders, the sensorimotor GPi, and MER depths was performed using the Distal Atlas in Montreal Neurological Institute space (p > 0.5 threshold) with Lead DBS v2.3, in conjunction with an expert electrophysiologist. At the preoperative and postoperative follow‐up evaluations, the Burke–Fahn–Marsden Dystonia Rating Scale (BFMDRS) was employed to assess motor severity. The demographic and clinical profiles of the patients are presented in Table . We adopted the methodology used to compare STN electrophysiology in monogenetic and idiopathic forms of Parkinson's disease (PD) . Briefly, we divided MERs into 50 ms segments, computed the root mean square (RMS), and labeled segments stable if their RMS values were within three standard deviations of the median RMS. The longest stable section of each recording was selected for further analysis . To estimate the power spectral density (PSD), we rectified the stable raw signal and subtracted the mean to reveal the low‐frequency envelope. The PSD was estimated with a resolution of 1/3 Hz and normalized to the total power within the analysis range (2–200 Hz) to mitigate the influence of varying RMS values across patients . The length of the detected GPi region was normalized to 1, where 0 represents the GPi entry . Trajectories were included in the analyses if they had a minimum GPi length of 2 mm and recordings from at least four distinct depths (Figure ). Theta (4–8 Hz), alpha (8–12 Hz), beta (12–30 Hz), and gamma (30–100 Hz) band activity was extracted using a four‐pole Butterworth band‐pass filter. The fraction of power in each band and the ratios between bands were measured as electrophysiological features. The Kruskal–Wallis test and post hoc Dunn's tests with Holm–Bonferroni correction were used for group comparisons. Spearman's correlation with permutation testing was used to assess the significance of linear relationships between variables, with Benjamini–Hochberg (FDR) correction applied to the p values of the measured correlations. All tests were two‐tailed, and statistical significance was defined as p ≤ 0.05. We employed a linear mixed model (LMM) with the Wald test to investigate the relationship between normalized depth and spectral properties within the GPi, treating genetic factors as random effects. This approach allowed us to assess the influence of GPi depth on spectral properties while accounting for random variation from genetic factors and individual differences.
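To make the preprocessing pipeline concrete, the following is a minimal sketch of the stable-segment selection and spectral feature extraction described above, written in Python with NumPy/SciPy. It assumes a one-dimensional MER array and its sampling rate; the function names, default parameters, and data layout are illustrative and are not taken from the authors' code.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch


def longest_stable_section(mer, fs, win_s=0.05, n_sd=3):
    """Split the recording into 50 ms segments, mark a segment stable when its
    RMS lies within n_sd standard deviations of the median segment RMS, and
    return the longest run of consecutive stable segments."""
    win = int(win_s * fs)
    n_seg = len(mer) // win
    segs = mer[:n_seg * win].reshape(n_seg, win)
    rms = np.sqrt(np.mean(segs ** 2, axis=1))
    stable = np.abs(rms - np.median(rms)) <= n_sd * np.std(rms)

    best_len = best_start = run_len = run_start = 0
    for i, ok in enumerate(stable):
        if ok:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len > best_len:
                best_len, best_start = run_len, run_start
        else:
            run_len = 0
    return segs[best_start:best_start + best_len].ravel()


def band_power_fractions(stable, fs):
    """Rectify and de-mean the stable section to expose the low-frequency
    envelope, estimate a ~1/3 Hz resolution PSD, and return the fraction of
    total 2-200 Hz power in each canonical band (assumes the stable section
    is at least ~3 s long)."""
    bands = {"theta": (4, 8), "alpha": (8, 12), "beta": (12, 30), "gamma": (30, 100)}
    env = np.abs(stable)            # full-wave rectification
    env -= env.mean()
    f, pxx = welch(env, fs=fs, nperseg=int(3 * fs))   # fs / nperseg = 1/3 Hz bins
    total = pxx[(f >= 2) & (f <= 200)].sum()
    return {name: pxx[(f >= lo) & (f < hi)].sum() / total
            for name, (lo, hi) in bands.items()}


def butterworth_band_power(stable, fs, lo, hi, order=4):
    """Alternative band extraction: four-pole Butterworth band-pass applied
    zero-phase, with band power taken as the mean square of the output."""
    b, a = butter(order, [lo, hi], btype="bandpass", fs=fs)
    return np.mean(filtfilt(b, a, stable) ** 2)
```

Band ratios (e.g., beta/theta) would then be computed directly from these per-recording fractions before any group-level statistics.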
Results
In total, 597 MERs were collected from 70 trajectories across 30 patients in the cohort (Table ). All analyzed MER trajectories passed through the sensorimotor portion of the GPi. Therefore, we do not anticipate spatial confounders affecting the electrophysiology (Figure ). Elevated relative power in the theta and alpha bands compared to the baseline (median power of the whole spectrum up to 100 Hz) was observed across dystonia syndromes (Figures and ). We detected statistically significant differences between patient groups in all spectral features except for the alpha/theta ratio (p ≤ 0.05, Kruskal–Wallis test with Holm–Bonferroni correction) (Table ). The fraction of alpha band activity was significantly lower for DYT‐SGCE (2.97%) compared to iDYT (4.44%, p = 0.006, Dunn's test with Holm–Bonferroni correction) and DYT‐THAP1 (4.51%, p = 0.011) patients (Figure ). Similarly, the fraction of power in the theta band was significantly lower in DYT‐SGCE (4.42%) compared to iDYT (7.91%, p = 0.002) and DYT‐THAP1 (7.00%, p = 0.019) patients (Figure ). The fraction of gamma power for the DYT‐VPS16 group (37.04%) was significantly higher than in the remaining groups, apart from DYT‐SGCE (Figure ). We investigated whether the observed electrophysiological differences were linked to motor symptom severity rather than genetic etiology. This was accomplished by calculating Spearman's correlation between the per-patient median values of the electrophysiological features and the corresponding baseline BFMDRS scores, as well as the percentage change in these scores between the preoperative and postoperative evaluations (Figure ). We observed moderate effect sizes (0.4–0.6) for certain clinical-spectral feature pairs, but none reached statistical significance (p > 0.05, Spearman's correlation with FDR correction) (Figure ). Although no significant correlation was found, the power spectral characteristics of the GPi in iDYT cases appeared to be more strongly associated with disease severity than in the genetic dystonia syndromes (Figure ). Additionally, we sought to elucidate the spatial characteristics of GPi electrophysiology along MER trajectories by measuring the linear relationship between normalized depth (GPi entry = 0, GPi exit = 1) and the spectral features. The beta/theta and beta/alpha ratios showed a significant linear relationship with depth between the borders of the GPi in DYT‐VPS16 patients (r_s = 0.30, p = 0.049) (Figure ). Finally, no evidence was found that genetic etiology spatially modulates the electrophysiological properties of MER recordings within the GPi (LMM, Wald test with Holm–Bonferroni correction, p > 0.05) (Table ).
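As an illustration of the statistical workflow behind these results, the sketch below implements a permutation-based Spearman correlation with Benjamini–Hochberg (FDR) correction across features, and a simplified linear mixed model of depth effects with genetic etiology as the grouping (random) factor, using SciPy and statsmodels. The column and variable names are hypothetical, and the random-effects structure is a simplified stand-in for the model reported here.

```python
import numpy as np
from scipy.stats import spearmanr
from statsmodels.stats.multitest import multipletests
import statsmodels.formula.api as smf


def spearman_perm(x, y, n_perm=10_000, seed=0):
    """Spearman correlation with a two-tailed permutation p-value."""
    rng = np.random.default_rng(seed)
    r_obs, _ = spearmanr(x, y)
    null = np.array([spearmanr(x, rng.permutation(y))[0] for _ in range(n_perm)])
    p = (np.sum(np.abs(null) >= np.abs(r_obs)) + 1) / (n_perm + 1)
    return r_obs, p


def correlate_features_with_severity(features, bfmdrs):
    """features: dict of {feature name: per-patient median values};
    bfmdrs: matching clinical scores. FDR-correct across features."""
    names = list(features)
    stats = [spearman_perm(np.asarray(features[n]), np.asarray(bfmdrs)) for n in names]
    reject, p_adj, _, _ = multipletests([p for _, p in stats], alpha=0.05, method="fdr_bh")
    return {n: {"rho": r, "p_fdr": q, "significant": bool(rej)}
            for n, (r, _), q, rej in zip(names, stats, p_adj, reject)}


def depth_effect_model(df):
    """df columns (illustrative): 'feature', 'norm_depth', 'gene'.
    Random intercept and slope per genetic group; Wald tests for the fixed
    effects appear in the fitted model's summary."""
    model = smf.mixedlm("feature ~ norm_depth", data=df,
                        groups=df["gene"], re_formula="~norm_depth")
    return model.fit()
```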
Discussion
Here, we showed that the genetic etiology of dystonia may influence the low‐frequency components of GPi electrophysiology. Reductions in theta and alpha band activities were observed in DYT‐SGCE compared to DYT‐THAP1 and iDYT patients. These differences are unlikely to be attributable to the severity of dystonia, as no significant correlation was found between these components and motor symptom severity. We further demonstrated that the genetic etiology of dystonia is irrelevant to MER‐based lead localization in GPi‐DBS surgeries, as the spatial characteristics of spectral features remain consistent when genetic factors are treated as random effects in the LMM. Weill and colleagues proposed that genetic heterogeneity in PD is not associated with robust electrophysiological differences in the STN . In our case, the low‐frequency GPi activity of DYT‐SGCE differed significantly from that of DYT‐THAP1 and iDYT. It has been proposed that distinct oscillatory circuit disruptions may underlie dystonia and parkinsonism, even in the same anatomical structure . In this context, the effects of genetic etiology can differ across movement disorders and brain regions. Our results align with this study, as we did not observe any impact of genetic etiology on the spatial characteristics of GPi electrophysiology. We previously analyzed pallidal single‐unit activity in a larger genetic dystonia cohort , including the patients analyzed here. We observed convergence among dystonia genes toward either strong bursting or tonic behavior, with SGCE and THAP1 exhibiting opposite behaviors. This is consistent with the differing low‐frequency activity of the GPi associated with these two genes at the population level in the present work . It should be noted that our study has several limitations. The limited sample size may impact the findings, though the high number of single MER epochs could mitigate this to some extent. The limited number of trajectories per patient prevented a consistent evaluation of potential spatial confounding factors along the anteroposterior and mediolateral directions. Lastly, electrophysiological activity recorded from the GPi using microelectrodes reflects considerably smaller neuronal populations than the LFPs recorded with DBS macroelectrodes. Therefore, the potential effects of genetic etiology should also be investigated in LFP recordings. In conclusion, we suggest that genetic etiology may impact low‐frequency activity within the GPi, especially in patients with DYT‐SGCE , in whom these bands appear to be underrepresented. An adaptive DBS paradigm for dystonia based on the spectral fluctuations of recorded activity should then take into account potential differences between genetic dystonia profiles. However, further studies with larger sample sizes and a broader range of dystonia genes are needed to evaluate the clinical relevance of our hypothesis.
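To illustrate the closing point about adaptive DBS, the toy sketch below gates stimulation on the fraction of low-frequency (4–12 Hz) power in a pallidal recording, with an etiology-specific threshold. It is purely conceptual: the threshold values, names, and decision rule are hypothetical and are not derived from this study or from any clinical aDBS system.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Hypothetical per-etiology thresholds on the 4-12 Hz power fraction.
LOW_FREQ_THRESHOLDS = {"iDYT": 0.12, "DYT-THAP1": 0.12, "DYT-SGCE": 0.07}


def low_freq_fraction(signal, fs, band=(4, 12)):
    """Fraction of total power carried by the 4-12 Hz band."""
    x = signal - np.mean(signal)
    b, a = butter(4, band, btype="bandpass", fs=fs)
    return np.mean(filtfilt(b, a, x) ** 2) / np.mean(x ** 2)


def stimulate(window, fs, etiology):
    """Toy decision rule: enable stimulation when the low-frequency power
    fraction exceeds the etiology-specific threshold."""
    return low_freq_fraction(window, fs) > LOW_FREQ_THRESHOLDS.get(etiology, 0.10)
```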
Ahmet Kaymak: conceptualization, methodology, software, data curation, investigation, validation, formal analysis, visualization, project administration, writing – original draft, writing – review and editing. Luigi M. Romito: conceptualization, methodology, data curation, validation, supervision, project administration, resources, writing – original draft, writing – review and editing. Fabiana Colucci: data curation, investigation, writing – review and editing. Nico Golfrè Andreasi: data curation, writing – review and editing. Roberta Telese: data curation. Sara Rinaldo: data curation. Vincenzo Levi: data curation, writing – review and editing. Giovanna Zorzi: data curation. Zvi Israel: supervision. David Arkadir: supervision, writing – review and editing. Hagai Bergman: supervision. Miryam Carecchio: supervision. Holger Prokisch: supervision. Michael Zech: supervision, writing – review and editing. Barbara Garavaglia: data curation, investigation, validation. Alberto Mazzoni: conceptualization, methodology, validation, supervision, funding acquisition, project administration, writing – original draft, writing – review and editing. Roberto Eleopra: supervision.
The study was approved by the Ethical Committee of Fondazione IRCCS Istituto Neurologico Carlo Besta.
Informed consent was obtained from patients and their legal representatives.
The authors declare no conflicts of interest.
Data S1.
Decision-making participation of people with mental health difficulties at a community rehabilitation center in Taiwan | 55cc6cc7-8d0b-484c-9585-0509fc7b029a | 11924254 | Community Health Services[mh] | The human rights model, as advocated by the Convention on the Rights of Persons with Disabilities (CRPD), emphasizes the importance of respecting the autonomy and decision-making rights of people with mental health difficulties. In the case of Taiwan, however, despite its commitment to a recovery-oriented approach, decision-making within mental health services often remains dominated by medical professionals and service providers. This raises a critical question: why do people with mental health difficulties still face significant barriers in exercising their rights to participate in decisions regarding their care? This study aims to address the following three research questions. What support or obstacles do people with mental health difficulties encounter in decision-making? What social, institutional, and power dynamics shape mental health services? How do people respond to these power relations? By using institutional ethnography (IE), this research examines how the recovery model and the human rights framework are being implemented within Taiwan’s mental health services, revealing critical gaps between policy and practice.
In the rehabilitation of people with mental health issues, there has been a shift from the medical to a recovery model (Scholz et al., ). Recovery is described as “deeply personal,” with “values, feelings, goals, roles, satisfaction, even with limitations caused by illness” (Anthony, , p. 15). One element of the recovery model is “empowerment,” which advocates that people with mental disabilities can “make their own decisions.” Consumer empowerment includes two levels: individual and systemic (Zimmerman & Warschausky, ). In the recovery model, the relationship between professionals and clients is “intersubjectivity.” The model emphasizes that everyone has a different recovery journey. People with mental disabilities are no longer considered passive recipients of services but have the right to participate in decisions (Wang, ). All services should respect the different needs of individuals. This concept can be seen as a continuation of the human rights perspective, in which people with disabilities are seen as extension of human diversity. With deinstitutionalization (1960s) and the consumer movement (1970s), the community-oriented model of mental healthcare became mainstream. Countries have successively increased community-based service facilities, enabling patients to have meaningful lives based on the recovery model approach (Hsieh & Shiau, ; Wang & Ouyang, ). People with mental health difficulties can use services and participate in decision-making about themselves while feeling respected. However, the service model for people with disabilities is still based on individual model addressing personal care services under neoliberalism. The rights and interests of people with disabilities must be based on individualism and encourage self-supporting (Tien, , p. 16). Thus, the establishment and development of mental healthcare services have ignored the problems of social structure. Instead, it has focused on the individualized treatment. The development of mental healthcare in Taiwan In 1935, the Japanese colonial government introduced the Home Custody Act of Mental Patients from Japan, which designated mental illness care as a family responsibility (Tien, ; Wu, ). In 1980, the Welfare Law for Handicapped Persons was implemented and Taiwan conducted its first large-scale medical appraisal. The categorization of people with physical and mental disabilities became the basis for obtaining welfare measures (Tien, ). In contrast to the institutionalization and de-institutionalization movements of European countries and the US, Taiwan experienced Japanese colonialism and national government governance. Mental healthcare institutions were highly heterogeneous, including government-run hospitals, religiously influenced facilities, and privately-operated shelters. After the 1980s, psychiatrists established professional authority in psychiatry and practices of mental healthcare institutions were standardized (Tang, ). In 1986, the Department of Health planned a national psychiatric network to improve welfare services for mental health patients and encourage returns to the community and family (National Health Research Institutes, ). In the 1990s, psychiatric medicine was institutionalized and a classification table for patients and a system of rights and responsibilities were developed. In March 1995, Taiwan formally implemented a National Health Insurance (NHI) system, including community rehabilitation services. 
In July 1999, the mental health system, the Table of Division of Authority and Responsibility for the Care System of Mental Illness, delineated six categories of people with mental illness according to their conditions and functions. The policy framework established after the 1990s serves as the foundation for contemporary mental health care in Taiwan. The treatment and rehabilitation of mental illnesses are categorized as items covered by the NHI and fall under the jurisdiction of the health regulatory authorities. This implies that services related to mental illnesses must adhere to the standards set by the health regulatory authorities. In 2007, scholars began introducing the concept of “recovery” in psychiatric community rehabilitation (Song, ). Unlike the West, Taiwan has not experienced institutionalization or de-institutionalization. It also lacks voices of those who have experienced mental illness. The concept of recovery started from a policy perspective and advanced to practical work. However, because of Taiwan’s historical background, its mental care system remains dominated by the medical profession, and demand and service supply are limited to existing resource allocation (Tien, ). Taiwan’s mental healthcare system According to Table of Division of Authority and Responsibility for the Care System of Mental Illness ( ), Taiwan’s six categories of mental illness are based on symptoms and treatment. The services belong to different responsibility units, including health, social, and labour administrations. Services still operate within this structure. Regular evaluations are conducted for service quality (C. J. Hsieh & Shiau, ). Community rehabilitation centres serve people with a stable mental illness, partial functional degradation, potential for rehabilitation, and no need for full-time hospitalization, but requiring active rehabilitation treatment. Compared to other types of people with mental health difficulties, the fourth type served by community rehabilitation centres should have more autonomy and decision-making participation with fewer restrictions when receiving services. Furthermore, the evaluation criteria of mental rehabilitation institutions in Taiwan mention “recovery” to focus on the autonomy of service users. In addition to the formulation and revision of national policies, the implementation of rights of people with disabilities requires that service providers practice in various regional units. Decisions under professionals Under the biopsychosocial model, doctors, nurses, psychologists, occupational therapists, and social workers are included in a mental health professional team to provide multifaceted professional knowledge and services (Rosen & Callaly, ). The roles of professionals in communities differ significantly from those in hospitals (Hsieh & Shiau, ). Professionals have a “cooperative” relationship with people with mental disabilities, and the therapeutic relationship should be closer and more equal. According to the Establishment and Management Measures/Regulations of the Psychiatric Rehabilitation Agency, a full-time case manager must be appointed for life care and training of service users. The decision-making process of people with mental disabilities in community rehabilitation services is complex because of the multiple personnel involved. In situations with many participants, decision-making is prone to dilemmas, especially the knowledge and power gaps between patients and non-patients. 
Wu and Ching ( ) found that patients with chronic schizophrenia might not have the opportunity to make their own decisions during the recovery process and are affected by many family restrictions. In a qualitative study conducted in Taiwan, Lin et al. ( ) found that persons with mental health difficulties had varying degrees of negative experiences with shared decision-making, often with paternalism. Although the recovery model values personal decisions, people with mental disorders tend to leave decisions to others with authorities. These diverse and intertwined opinions complicate cooperative relationships between professionals, family members, and individuals with mental disabilities. Institutional texts of psychiatric care According to the division of the care system for patients with mental illnesses, patients go through different units and professionals. In the process of executing the service, all decisions regarding rehabilitation are organized using various documents that can be checked and delivered easily. This type of service highlights the importance of documents. Kuo ( ) and Chen ( ) noted that staff redesigned the work form and regulated each member’s participation in the necessary training to respond to the standards of evaluation. The documents designed in response to the system assist the work process yet limit staff freedom to provide services. In the psychiatric care system, there is high reliance on documentation circulated around units and professionals to facilitate understanding of clients. These documents not only provide information, but also present knowledge and power, implying class and governance. In this power dynamic, the participation and autonomy of people with mental health difficulties may be influenced and suppressed. The practical process and conflicts of participation in decision-making People with mental health disabilities face many difficulties in participating in decision-making, as the following subsections outline. Translation of the concept of human rights The process of localizing human rights conventions requires consideration of local cultural values (Kelly et al., ) and the appropriateness of the conventions for translation (Zwingel, ). For Taiwan, Chiang et al. ( ) found that even though the government’s quota policy guarantees the employment of people with disabilities, people still think that people with mental disabilities are dangerous and refuse to employ them. Therefore, the practice of the concept of human rights is not limited to revising the rule of law. The process of adaptation between workers and other people is even more difficult and important. Taking care of budget allocation Mental illness is characterized by early ageing and lower average life expectancy (Hsieh et al., ). However, governments usually give “low priority” to mental health (Burns, , cited from Molas, ). For Taiwan, Wang ( ) noted that current mental care resources are invested in medical services, and the demand for community care is about five times that of medical services, whereas the funds received are only one-tenth of the demands. Mental health should not be limited to hospitalization or medication use. Education, stigma reduction, and provision of community resources are all included (Pathare et al., ). Thus, mental health resources should be used not only for medical treatment but also to promote community living. 
Service restrictions under the system and evaluation The concept of the rights and autonomy of people with mental disabilities clearly advanced following the consumer movement in the 1970s. In Taiwan, the rights of people with mental disabilities underwent initial policy revisions at the end of the 20th century. However, the choices of people with mental disabilities in the treatment process and their status in professional interactions are rarely mentioned; therefore, it is difficult to assess whether people with mental disabilities have equal human rights. Devi et al. ( ) found that application forms for these social services are mostly in professional language, less in ways that users can understand or use, and through repeated meetings and management. Thus, rights are superficial. Chen ( ) noted that the services were mostly extended from psychiatric medicine, focusing on “disease” and “treatment,” and professional staff controlled the service use. Kuo ( ) found that text-mediated domination makes people disappear and focuses only on “needs” and “problems” under the requirements of the documents for evaluation in the clubhouse. If organizations cannot pass the evaluation, they will lose health insurance benefits, but more worryingly, the question arises as to where service users should go. Taiwan began implementing the CRPD Enforcement Law in 2014. However, according to the researcher’s observations, not all services in mental rehabilitation institutions today can be determined by the client or with support. Paternalism and shared decision-making Mental illness affects life in multiple unpredictable ways and prevents people from exercising their rights in the same way as people do in general (Molas, ). In Chinese Confucian culture, the family is the basic unit of social structure, and paternalism plays an important role in the decision-making process (Tai & Tsai, ). When treating persons with mental health difficulties, doctors tend to reconsider clients’ words carefully because of prejudice. Professionals skip communication with patients and seek consensus with the patient’s family. Therefore, alternative decision-making services remain widely accepted (Eaton, ). However, Heerings et al. ( ) pointed out that, in caring relationships, “relationship building” and “communication” are at the heart of the entire service. Therefore, it is necessary to empower participants. In recent years, the concept of shared decision-making has gained attention, whereby patients and medical staff make decisions together and respect patients’ autonomy (Elwyn et al., ). In a qualitative study conducted in Taiwan by Lin et al. ( ), all respondents were expected to play a more active role in the relationship. In light of this, this research aims to explore the challenges faced by people with mental health difficulties in participating in decision-making within community rehabilitation centres, focusing on how power dynamics and institutional constraints impact their rights and autonomy.
In 1935, the Japanese colonial government introduced the Home Custody Act of Mental Patients from Japan, which designated mental illness care as a family responsibility (Tien, ; Wu, ). In 1980, the Welfare Law for Handicapped Persons was implemented and Taiwan conducted its first large-scale medical appraisal. The categorization of people with physical and mental disabilities became the basis for obtaining welfare measures (Tien, ). In contrast to the institutionalization and de-institutionalization movements of European countries and the US, Taiwan experienced Japanese colonialism and national government governance. Mental healthcare institutions were highly heterogeneous, including government-run hospitals, religiously influenced facilities, and privately-operated shelters. After the 1980s, psychiatrists established professional authority in psychiatry and practices of mental healthcare institutions were standardized (Tang, ). In 1986, the Department of Health planned a national psychiatric network to improve welfare services for mental health patients and encourage returns to the community and family (National Health Research Institutes, ). In the 1990s, psychiatric medicine was institutionalized and a classification table for patients and a system of rights and responsibilities were developed. In March 1995, Taiwan formally implemented a National Health Insurance (NHI) system, including community rehabilitation services. In July 1999, the mental health system, the Table of Division of Authority and Responsibility for the Care System of Mental Illness, delineated six categories of people with mental illness according to their conditions and functions. The policy framework established after the 1990s serves as the foundation for contemporary mental health care in Taiwan. The treatment and rehabilitation of mental illnesses are categorized as items covered by the NHI and fall under the jurisdiction of the health regulatory authorities. This implies that services related to mental illnesses must adhere to the standards set by the health regulatory authorities. In 2007, scholars began introducing the concept of “recovery” in psychiatric community rehabilitation (Song, ). Unlike the West, Taiwan has not experienced institutionalization or de-institutionalization. It also lacks voices of those who have experienced mental illness. The concept of recovery started from a policy perspective and advanced to practical work. However, because of Taiwan’s historical background, its mental care system remains dominated by the medical profession, and demand and service supply are limited to existing resource allocation (Tien, ).
According to the Table of Division of Authority and Responsibility for the Care System of Mental Illness ( ), Taiwan's six categories of mental illness are based on symptoms and treatment. The corresponding services belong to different responsible units, including the health, social welfare, and labour administrations, and services still operate within this structure. Regular evaluations are conducted to ensure service quality (C. J. Hsieh & Shiau, ). Community rehabilitation centres serve people with a stable mental illness, partial functional decline, rehabilitation potential, and no need for full-time hospitalization, but who require active rehabilitation treatment. Compared with the other categories of people with mental health difficulties, the fourth category, served by community rehabilitation centres, should have more autonomy and decision-making participation and face fewer restrictions when receiving services. Furthermore, the evaluation criteria for mental rehabilitation institutions in Taiwan mention "recovery" and emphasize the autonomy of service users. In addition to the formulation and revision of national policies, implementing the rights of people with disabilities requires that service providers put them into practice in their various regional units.
Decisions under professionals
Under the biopsychosocial model, doctors, nurses, psychologists, occupational therapists, and social workers are included in a mental health professional team to provide multifaceted professional knowledge and services (Rosen & Callaly, ). The roles of professionals in communities differ significantly from those in hospitals (Hsieh & Shiau, ). Professionals have a "cooperative" relationship with people with mental disabilities, and the therapeutic relationship should be closer and more equal. According to the Establishment and Management Measures/Regulations of the Psychiatric Rehabilitation Agency, a full-time case manager must be appointed for the life care and training of service users. The decision-making process of people with mental disabilities in community rehabilitation services is complex because of the multiple personnel involved. In situations with many participants, decision-making is prone to dilemmas, especially given the knowledge and power gaps between patients and non-patients. Wu and Ching ( ) found that patients with chronic schizophrenia might not have the opportunity to make their own decisions during the recovery process and are affected by many family restrictions. In a qualitative study conducted in Taiwan, Lin et al. ( ) found that persons with mental health difficulties had varying degrees of negative experiences with shared decision-making, often marked by paternalism. Although the recovery model values personal decisions, people with mental disorders tend to leave decisions to others with authority. These diverse and intertwined opinions complicate the cooperative relationships between professionals, family members, and individuals with mental disabilities.
Institutional texts of psychiatric care
According to the division of the care system for patients with mental illnesses, patients move through different units and professionals. In the process of delivering the service, all decisions regarding rehabilitation are organized using various documents that can be checked and transferred easily. This type of service highlights the importance of documents. Kuo ( ) and Chen ( ) noted that staff redesigned work forms and regulated each member's participation in the necessary training to respond to the standards of evaluation. The documents designed in response to the system assist the work process yet limit staff freedom to provide services. In the psychiatric care system, there is a high reliance on documentation circulated among units and professionals to facilitate understanding of clients. These documents not only provide information but also embody knowledge and power, implying hierarchy and governance. In this power dynamic, the participation and autonomy of people with mental health difficulties may be influenced and suppressed.
People with mental health disabilities face many difficulties in participating in decision-making, as the following subsections outline.
Translation of the concept of human rights
The process of localizing human rights conventions requires consideration of local cultural values (Kelly et al., ) and the appropriateness of the conventions for translation (Zwingel, ). For Taiwan, Chiang et al. ( ) found that, even though the government's quota policy guarantees the employment of people with disabilities, many still regard people with mental disabilities as dangerous and refuse to employ them. The practice of the concept of human rights is therefore not limited to revising the rule of law; the process of adaptation between workers and the people around them is even more difficult and important.
Taking care of budget allocation
Mental illness is characterized by early ageing and lower average life expectancy (Hsieh et al., ). However, governments usually give "low priority" to mental health (Burns, , cited in Molas, ). For Taiwan, Wang ( ) noted that current mental care resources are concentrated in medical services: the demand for community care is about five times that for medical services, whereas the funds it receives amount to only one-tenth of that demand. Mental health should not be limited to hospitalization or medication use; education, stigma reduction, and the provision of community resources are all included (Pathare et al., ). Thus, mental health resources should be used not only for medical treatment but also to promote community living.
Service restrictions under the system and evaluation
The concept of the rights and autonomy of people with mental disabilities clearly advanced following the consumer movement in the 1970s. In Taiwan, the rights of people with mental disabilities underwent initial policy revisions at the end of the 20th century. However, the choices of people with mental disabilities in the treatment process and their status in professional interactions are rarely mentioned; it is therefore difficult to assess whether people with mental disabilities enjoy equal human rights. Devi et al. ( ) found that application forms for such social services are written mostly in professional language rather than in ways that users can understand or use, and that access is managed through repeated meetings and administrative procedures; rights thus remain superficial. Chen ( ) noted that services were mostly extended from psychiatric medicine, focusing on "disease" and "treatment," with professional staff controlling service use. Kuo ( ) found that, in the clubhouse, text-mediated domination makes the person disappear, focusing only on "needs" and "problems" as required by the documents used for evaluation. If organizations cannot pass the evaluation, they lose health insurance benefits; more worryingly, the question arises as to where service users should then go. Taiwan began implementing the CRPD Enforcement Law in 2014. However, according to the researcher's observations, not all services in mental rehabilitation institutions today can be decided by the client, either independently or with support.
Paternalism and shared decision-making
Mental illness affects life in multiple, unpredictable ways and prevents people from exercising their rights in the same way as people in general do (Molas, ). In Chinese Confucian culture, the family is the basic unit of the social structure, and paternalism plays an important role in the decision-making process (Tai & Tsai, ). When treating persons with mental health difficulties, doctors tend to second-guess clients' words because of prejudice, and professionals skip communication with patients and instead seek consensus with the patient's family. As a result, alternative decision-making services remain widely accepted (Eaton, ). However, Heerings et al. ( ) pointed out that, in caring relationships, "relationship building" and "communication" are at the heart of the entire service; it is therefore necessary to empower participants. In recent years, the concept of shared decision-making has gained attention, whereby patients and medical staff make decisions together and patients' autonomy is respected (Elwyn et al., ). In a qualitative study conducted in Taiwan by Lin et al. ( ), all respondents expected to play a more active role in the relationship. In light of this, the present study explores the challenges faced by people with mental health difficulties in participating in decision-making within community rehabilitation centres, focusing on how power dynamics and institutional constraints affect their rights and autonomy.
This study employs IE to explore interactions and governance within service networks for individuals with mental health difficulties. Unlike general qualitative research, IE not only documents participants' lived experiences but also maps how these experiences are shaped by social and institutional structures (Devault, ). The primary goal of IE is to uncover how institutional processes govern individuals' lives, often in ways that might not be immediately visible to those affected. To achieve this, the study prioritizes the experiences and interpretations of service users themselves, adhering to IE's principle of starting from the standpoint of those most directly impacted by institutional structures.
Recruitment for this study was conducted at a community rehabilitation centre located in a metropolitan area of northern Taiwan. The study used purposive sampling to interview individuals over 20 years old who were using services at the centre, aiming to understand how they selected services, their experiences in decision-making, and the related social interactions. Additionally, the study interviewed relevant staff, such as case managers and professionals, following up on the work knowledge and processes mentioned by the service users. Important documents, such as manuals, referral forms, assessment reports, and rehabilitation plans, were collected and analysed to clarify relationships, power dynamics, and cross-boundary interactions, providing a detailed account of the policies and ideologies behind "participation in decision-making." Relevant regulations and policies were also reviewed.
The study was approved by the Research Ethics Committee of National Taiwan University (Approval No.: 202206ES055), with the approval period from 29 July 2022 to 30 June 2023. Data were collected through interviews, text analysis, and participatory observation by the researcher. Interviews involved three service users, two case managers, and the centre leader (see ). To ensure confidentiality, all participants are referred to by pseudonyms throughout the article. Additionally, the documents analysed were completed by relevant participants and reviewed to ensure accuracy ( ). Interviews were semi-structured and aimed to explore participants' experiences with decision-making, power dynamics, and their interactions with service providers. They were conducted in a private setting within the rehabilitation centre to ensure confidentiality and comfort, and participants were given the opportunity to review their transcripts to verify accuracy.
The analysis followed a multi-step process common in IE. Initially, interviews were transcribed and coded to identify key themes related to decision-making and power dynamics within the service network. These themes were then cross-referenced with textual data, such as policy documents and manuals, to trace how institutional practices shape participants' experiences. Field notes from participatory observation were also integrated into the analysis to provide a richer understanding of the power relations at play.
Regarding reflexivity, it is crucial to acknowledge the dual role of the researcher as both a part-time occupational therapist and an observer. This dual role allowed for in-depth engagement with the field but also introduced potential biases. Throughout the study, the researcher remained aware of these roles and actively reflected on how they might influence data collection and interpretation.
To mitigate these biases, standardized work interactions were maintained during observations, and formal interviews were conducted using informed consent protocols. Reflexive field notes were kept to document the researcher’s thoughts, potential biases, and ethical challenges encountered during the study. By maintaining a critical perspective on the researcher’s positionality and continuously reflecting on the influence of this dual role, the study ensures that the voices of service users are foregrounded, aligning with the empowerment and recovery principles central to this research.
This study utilizes IE to explore the lived experiences of people with mental health difficulties within service networks. Unlike general qualitative research, which often focuses on participants' narratives to derive meaning, IE goes a step further by mapping how these experiences are shaped and constrained by broader social and institutional structures (Devault, ). The aim is not only to document individual stories but also to trace the power dynamics and institutional processes that influence daily actions and decisions. By examining the disconnect between personal experiences and institutional practices, IE provides a lens to uncover hidden relations of power that might otherwise remain obscured in more traditional qualitative approaches.
Taiwan's mental healthcare system
Community rehabilitation centres are daytime service organizations. The responsible agency is the health authority, and the service is subsidized by health insurance ( ).
What is an individual service plan?
The research field of this study is a community rehabilitation centre serving 23 individuals with mental illness. In this service unit, the individual service plan (ISP) is the core of the users' rehabilitation operations. The ISP is a plan generated through a professional and holistic assessment to assist service users in rehabilitation and help them achieve personal goals. The content of the ISP is derived from the regulations of the health authority, which divide services for clients into five dimensions: independent living, social functioning, occupational functioning, physical and mental health, and family and social support systems. Community rehabilitation services are expected to provide comprehensive services to individuals, often referred to as "holistic services." As the service design involves various aspects of human life and engages multiple staff members, the service is expected to be coherent.
The operation of the service begins with an assessment. The ISP is initially generated from observations of issues or challenges by the case manager, family members, or the users themselves. These observations are then organized by the case manager into the five dimensions and assigned to different professionals for assessment, forming an integrated rehabilitation plan. The plan includes goals and corresponding training strategies, which are discussed and confirmed by the case manager and the user. According to regulatory requirements, the rehabilitation plan must be updated at least every 3 months. Regular updates reflect modifications to the plan in response to changes in the user's condition, and each update requires re-evaluation and discussion to ensure a coherent progression of the plan. Correspondingly, the ISP involves diverse, interrelated, and coherent documents ( ). The ISP is the result of collaboration among multiple members and professionals, conducted in stages of assessment and implementation. According to regulatory standards, professionals are entrusted with the essential assessment tasks. The case manager is responsible for integrating information and conducting daily training, serving as the staff member most familiar with the individual.
Social relations within the individual service plan
Who participates in the individual service plan?
The case manager assumes a dual role in document management, serving as both the creator (document writer) and participant (contributor to discussions, provider of insights into the client's daily performance, and content reviewer).
Documents composed by the case manager, including the Rehabilitation Assessment Integration Form and the ISP, constitute pivotal elements in formulating the client's rehabilitation content. The case manager plays a crucial role in synthesizing information. This capability arises from their prolonged interactions with the client during the rehabilitation period and their expertise as a professional with a foundation in mental health services and case management. Furthermore, the case manager is an indispensable specialist within the community rehabilitation centre's service framework. Positioned between the client and professionals, the case manager functions as a client advocate, conveying updates and potential issues to professionals while implementing professionally analysed problems and suggestions in the client's services. Thus, before document creation, the case manager engages in discussions with various professionals, family members, and clients to scrutinize the rationality and feasibility of issues and recommendations. However, given that a case manager may handle up to 15 clients, their direct participation in the routine assessments (Step 2) or meetings (Step 6) conducted by professionals each month may not be feasible. In such instances, the case manager needs to continually monitor the dynamics of each case and provide individualized assistance with training. Consequently, for cases undergoing professional assessments, the case manager acquires information by reading the records written by professionals, using them as a foundation for adjusting the rehabilitation plan.
The client assumes the central role in the ISP, actively participating in each planning step. However, the client's involvement primarily pertains to the completion of forms related to Step 5. The design of the Step 5 documents serves two primary functions: (1) aiding the client in reviewing the rehabilitation process to consciously engage in daily training and enhance effectiveness, and (2) substantiating the client's engagement in diverse rehabilitation activities to meet the supervisory authority's requirements for institutional service diversity. Most documents are crafted by the case manager, and individuals hold distinct record forms based on their diverse training content. For instance, the diet record form is conceived by the case manager, considering the client's health check values, physical fitness tests, professional assessments, and so on, to facilitate health improvement.
However, the content documented by clients may not authentically depict the rehabilitation situation. For example, service user KK, who struggled with poor blood sugar control and financial management, was guided by the case manager to use a budget sheet to record expenses and dietary choices simultaneously. Although the client consistently documented the same items daily and apparently met expectations, overspending on pocket money and abnormal spikes in blood sugar levels were identified. A similar situation was observed with service user ZZ, who completed the water intake record and household chores record based on ideal conditions but often responded with silence when verbally questioned, revealing incomplete tasks. These instances highlight that the act of documenting frames the client's daily life, treating rehabilitation as an "exam," thereby undermining the "individual" and "comprehensive" aspects of the rehabilitation plan.
According to the Regulations on the Establishment and Management of Mental Rehabilitation Institutions, professionals include occupational therapists, psychologists, social workers, and nurses, among others. The regulations mandate specific educational backgrounds, professional experience, and seniority for these professionals. Professionals are expected to offer precise assessments and recommendations in community rehabilitation institutions. Hence, professionals serve as experts providing refined advice to the case manager for more effective rehabilitation. They are responsible for documenting assessment results, goals, and recommendations, primarily for the case manager's review. The language used not only analyses client issues but also provides instructive explanations.
Each document features a signature field to represent the involvement of the relevant individuals. According to regulations established by the supervisory authority, the client is the main figure in the rehabilitation plan. However, most documents are completed by case managers and professionals. To symbolize the inclusion of the client in the documented material, documents authored by individuals other than the client require the client's attendance, reading, and signature. The clients become "recorded" individuals.
Workflow and division of responsibilities
The formulation process of the ISP is dictated by the regulations set forth by the supervisory authority. It stipulates that the assessment is the responsibility of professionals, with full-time managers providing rehabilitation observations and guiding participants in self-observation and feedback; these become references for revising the rehabilitation service plan and simultaneously encourage participant involvement (2.1 Rehabilitation Assessment [Note]1).
Adhering to the regulations, the hiring of professionals is based on units of 15 individuals, with staffing calculated in multiples of 15. Taking our centre, which serves 23 individuals, as an example: because 23 service users exceed one unit of 15, staffing is calculated for two units, and the centre is therefore required to employ personnel based on a capacity of 30 individuals. According to the Standards for the Establishment of Mental Rehabilitation Institutions, our centre employs part-time occupational therapists for 10 hours per week, social workers for 4 hours per week, and a nurse (who also serves as the head of the centre, contributing 8 hours per week). Apart from the nurse, who doubles as the institutional head, the part-time professionals work staggered schedules to avoid scheduling conflicts in the counselling spaces.
Furthermore, the assessment by professionals is expected to cover the five dimensions, reflecting the holistic approach of people-centred services. Case managers observe specific dimensions in the client's daily life and provide relevant information to professionals during their shifts. Subsequently, professionals engage in individual conversations or assessments with the clients. For example, family-related issues observed by case managers may be conveyed to a social worker for assessment.
Rehabilitation assessment includes evaluations of independent living functions, social functions, occupational functions, mental and physical health, and assessment of family and social support systems. (Chapter 2 [Note]2)
Full-time managers act as daily facilitators and serve as interpreters between professional language and common language.
Because of the large number of service recipients, case managers do not directly participate but instead engage in subsequent interventions based on the documents written by professionals. The content of these documents records the client's status across the five dimensions. Case managers synthesize the professional analysis provided by the various professionals, reconcile any contradictions, and design training content based on the overall recommendations and specific goals. This information is then translated into layperson's terms for effective communication with the clients.
Challenges in collaboration
Within this community rehabilitation centre, the division of responsibilities between professionals and non-professionals, as well as among different specialities, not only separates job duties but also results in the segmentation of service recipients. The individual is categorized into five dimensions, each receiving different recommendations. However, owing to the staggered working hours of staff, information relies on document transmission. The case manager responsible for daily training needs to integrate the recommendations of the various professionals by reading the documents and formulating them into the ISP. However, the translation of text content may lead to misunderstandings owing to cognitive disparities.
For instance, in the Rehabilitation Assessment Integration Form and the ISP for service user MM during the fourth quarter of 2022 (year 111 in the ROC calendar), the case manager noted MM's problem-solving ability as insufficient, with MM often refusing attempts with the phrase "I can't." After assessment, the professionals wrote:
The client possesses problem-solving abilities, but because of low self-esteem, appears lacking in confidence and refuses attempts when faced with higher-demand tasks.
The professional recommendations included:
Staff can use a partnership approach to accompany the client in learning work skills, providing space and flexibility for attempts, and setting goals. The expectation is that the client can frequently discuss feelings and satisfaction after each activity to strengthen self-esteem. (Excerpt from Professional Assessment dated 2022.11.15)
However, when the case manager incorporated these suggestions into the ISP, they failed to comprehend the professional assessment results and recommendations. The case manager still emphasized the goal of "learning self-problem-solving methods." Several specific plans designed by the case manager focused primarily on requirements imposed on the client (e.g., requiring the client to research on their own when encountering problems rather than seeking help from staff, and earning reward points for successful completion), without recording the professional expectations of adjustments in service approaches, such as "partnership relationships" and "providing space for attempts."
This outcome highlights three challenges in the workflow: (1) understanding of document content is influenced by staff members' knowledge; (2) when documents are written by case managers, the writing perspective tends to lean towards instructive, training-oriented language, directing the client on "how" to undergo rehabilitation; and (3) the recommendations provided by professionals may suit the relationship between professionals and the client but not necessarily the relationship between case managers and the client.
From these observations, we recognize that, while the ISP is the individual's rehabilitation plan, other staff members play a significant role in the workflow. The professional judgement and guidance of professionals and case managers are documented. The staff members have various duties, and it is difficult for them to find the time and space for group communication. Relying on documents to transmit information, with the case manager integrating them, results in a lack of substantial "collaboration" among participants in assisting the client's rehabilitation training.
Professionalism first
In the Establishment and Management Measures/Regulations of the Psychiatric Rehabilitation Agency, the standardization of qualifications follows a hierarchy based on academic credentials. Professionals have more power in community rehabilitation centres because of their academic qualifications and experience in specific specialities. Even though professionals have expertise only in specific fields and part-time professionals work far fewer hours than full-time managers do, the system is designed to place speciality and academic qualifications first.
Professionals often cooperate with case managers to conduct assessments of specific dimensions. Based on the number of professionals and their hours of service, each client can be allocated less than one hour of individual interaction with professionals per month. For more accurate evaluations and analyses, professionals rely heavily on case managers to understand a client's situation. The evaluation results are then transferred to the case manager for implementation in the rehabilitation routine. However,
The time for professionals to come is cut into once a week, and some tasks (like group training) are required. I think there are really few discussions [case managers and professionals] can have unless the organization is willing to increase their working hours. At the economic level, it is difficult, which makes this thing not so easy to do. (Chi-Chun, head of the centre, 2023.02.17)
Therefore, "professionalism first" results in the centre designing a Rehabilitation Assessment Integration Form to distinguish the responsibilities of the various professionals and case managers. This form is filled in by the case manager according to daily observation of the client and then assigned to the various professionals, who collect and record data from the client during their shifts. Even if the case manager is the closest and most familiar staff member to the client, a good relationship does not confer the right to speak; rather, decisions are made based on whether the person is a "professional." Owing to the design of the work forms in the system, the rehabilitation service produces a "staff-led" service model instead of empowering clients.
Textualization of participation
How did you put those things in and let people know? … These are very routine. I have always made these routines a part of rehabilitation. Rehabilitation is such a routine. (Chi-Chun, head of the centre, 2023.02.17)
The ideal form of "participation" in rehabilitation and decision-making for people with mental health disabilities, based on the CRPD, should be active engagement in the rehabilitation activities that concern them. However, constrained by policy texts, the services and power relations in the rehabilitation centre are organized in such a way that the participation of people with mental disabilities becomes textualized. First, all rehabilitation projects must maintain records.
For some individuals with mental disabilities, the ticked boxes in the record form are a type of homework that must be submitted; however, this does not mean that every item is achieved. This kind of participation is not only a response to "policy expectations" but also reveals that people with mental illness do not understand, agree with, or dare to question the arrangement of the rehabilitation content. Second, the signature acts as proof. Most documents are designed or written by staff, but many documents lack signatures; from an evaluation perspective, signatures represent participation. Third, all related records are divided into categories according to the evaluation criteria instead of being integrated according to the needs of service users. The service is arranged according to the needs of the service provider, and the participation of the service user is built into a "viewable" form to obtain the resources necessary for the organization's operation.
This research shows that, even though the CRPD and local policy support the ability of people with mental health difficulties to express their preferences and ideas, these processes are not easily recorded and are difficult to observe. Participation in community rehabilitation can only be textualized into superficial, easily inspectable forms with signatures and ticks. Ultimately, the service becomes formulaic and lacks engagement.
Possibility of resistance
Although the work forms are designed according to the evaluation indicators, there is room for interpretation and creation. Chi-Chun, the head of the centre, said:
At least like this, you can develop the rest independently… Otherwise, all community rehabilitation centres in Taiwan will look the same, but everyone seems to look a little different. (Chi-Chun, head of the centre, 2023.02.17)
When supervising her colleagues, Chi-Chun recognizes that staff members have different professional training and knowledge, and that it is not easy for case managers to understand professionals' suggestions or clients' needs. However, this should not weaken anyone's role.
In the past, after explaining the situation to a professional, the case manager would take care of other cases. However, now, the person in charge would propose a meeting of the case manager, professional, and client. (Chi-Chun, head of the centre, 2023.02.17)
After this change in March 2023, the researcher, who is also a part-time occupational therapist, felt that the dialogue became smooth. This method reduces the distance between participants and empowers each of them.
The case manager joined the meeting. I felt that there was nothing to hide, just like the "open dialogue" I had learned before. In the space, the client can hear our considerations and then express his feelings. Also, I can hear the relationship and life stories in the interaction between clients and case managers. I can write down clearer suggestions, or even just record the resolution we are discussing at this moment. (2023.03.14 field notes)
Community rehabilitation centres are daytime service organizations. The responsible agency was the healthcare unit. This service is subsidized by health insurance ( ).
The research field of this study is a community rehabilitation centre serving 23 individuals with mental illness. In this service unit, the individual service plan (ISP) is the core of the users’ rehabilitation operations. The ISP is a plan generated through a professional and holistic assessment to assist service users in rehabilitation and achieve personal goals. The content of the ISP is derived from the regulations of the health authority, dividing services for clients into five dimensions, including independent living, social functioning, occupational functioning, physical and mental health, and family and social support systems. Community rehabilitation services are expected to provide comprehensive services to individuals, often referred to as “holistic services.” As the service design involves various aspects of human life and engages multiple staff members, the service is expected to be coherent. The operation of the service begins with an assessment. The generation of ISP initially comes from observations of issues or challenges by the case manager, family members, or the users themselves. It is then organized by the case manager into the five dimensions and assigned to different professionals for assessment, forming an integrated rehabilitation plan. The plan includes goals and corresponding training strategies, which are discussed and confirmed by the case manager and the user. According to regulatory requirements, the rehabilitation plan needs to be updated at least every 3 months. Regular updates indicate modifications to the plan in response to changes in the user’s condition. Each update requires a re-evaluation and discussion to ensure a coherent progression of the plan. Correspondingly, the ISP involves diverse, interrelated, and coherent documents ( ). The ISP is a result of collaboration among multiple members and professionals, conducted in stages involving assessment and implementation. According to regulatory standards, professionals are entrusted with the essential assessment tasks. The case manager is responsible for integrating information and conducting daily training, serving as the most familiar staff member to the individual.
Who participates in the individual service plan? The case manager assumes a dual role in document management, serving as both the creator (document writer) and participant (contributor to discussions, provider of client’s daily performance insights, and content reviewer). Documents composed by the case manager, including the Rehabilitation Assessment Integration Form and ISP, constitute pivotal elements in formulating the client’s rehabilitation content. The case manager plays a crucial role in synthesizing information. This capability arises from their prolonged interactions with the client during the rehabilitation period and their expertise as a professional with a foundation in mental health services and case management. Furthermore, the case manager is an indispensable specialist within the community rehabilitation centre’s service framework. Positioned between the client and professionals, the case manager functions as a client advocate, conveying updates and potential issues to professionals while implementing professionally analysed problems and suggestions into the client’s services. Thus, before document creation, the case manager engages in discussions with various professionals, family members, and clients to scrutinize the rationality and feasibility of issues and recommendations. However, given that a case manager may handle up to 15 clients, their direct participation in routine assessments (Step 2) or meetings (Step 6) conducted by professionals each month may not be feasible. In such instances, the case manager needs to continually monitor the dynamics of each case and provide individualized assistance with training. Consequently, for cases undergoing professional assessments, the case manager acquires information by perusing records from professionals, using it as a foundation for adjusting the rehabilitation plan. The client assumes the central role in the ISP, actively participating in each planning step. However, the client’s involvement primarily pertains to the completion of forms related to Step 5. The design of Step 5 documents serves two primary functions: (1) aiding the client in reviewing the rehabilitation process to consciously engage in daily training and enhance effectiveness, and (2) substantiating the client’s engagement in diverse rehabilitation activities to meet the supervisory authority’s requirements for institutional service diversity. Most documents are crafted by the case manager, and individuals hold distinct record forms based on diverse training content. For instance, the diet record form is conceived by the case manager, considering the client’s health check values, physical fitness tests, professional assessments, etc., to facilitate health improvement. However, the content documented by clients may not authentically depict the rehabilitation situation. For example, in the case of service user KK, struggling with poor blood sugar control and financial management, the case manager guided them to use a budget sheet to simultaneously record expenses and dietary choices. Despite the client consistently documenting the same items daily and meeting expectations, overspending on pocket money and abnormal spikes in blood sugar levels were identified. A similar situation was observed with service user ZZ, who completed the water intake record and household chores record based on ideal conditions but often responded with silence when verbally questioned, revealing incomplete tasks. 
These instances highlight that the act of documenting frames the client’s daily life, treating rehabilitation as an “exam,” thereby undermining the “individual” and “comprehensive” aspects of the rehabilitation plan. According to the Regulations on the Establishment and Management of Mental Rehabilitation Institutions, professionals include occupational therapists, psychologists, social workers, and nurses, among others. The regulations mandate specific educational backgrounds, professional experience, and seniority for these professionals. Professionals are anticipated to offer precise assessments and recommendations in community rehabilitation institutions. Hence, professionals serve as experts providing refined advice to the case manager for more effective rehabilitation. They are responsible for documenting assessment results, goals, and recommendations primarily for the case manager’s review. The language used not only analyzes client issues but also provides instructive explanations. Each document features a signature field to represent the involvement of relevant individuals. According to regulations established by the supervisory authority, the client is the main figure in the rehabilitation plan. However, most documents are completed by case managers and professionals. To symbolize the inclusion of the client in the documented material, documents authored by individuals other than the client necessitate the client’s attendance, reading, and signature. The clients become “recorded” individuals. Workflow and division of responsibilities The formulation process of the ISP is dictated by the regulations set forth by the supervisory authority. It stipulates that the assessment is the responsibility of professionals, with full-time managers providing rehabilitation observations and guiding participants in self-observation and feedback. These become references for revising the rehabilitation service plan and simultaneously encourage participant involvement. (2.1 Rehabilitation Assessment [Note]1) Adhering to regulations, the hiring of professionals is based on a unit of 15 individuals, with hiring calculated in multiples of 15. Taking our centre, which serves 23 individuals, as an example, the centre is required to employ personnel based on a capacity of 30 individuals. According to the Standards for the Establishment of Mental Rehabilitation Institutions, our centre employs part-time occupational therapists for 10 hours per week, social workers for 4 hours per week, and a nurse (who also serves as the head of the centre, contributing 8 hours per week). In addition to the nurse, who doubles as the institutional head, part-time professionals have staggered schedules to avoid scheduling conflicts in counselling spaces. Furthermore, the assessment by professionals is expected to cover five dimensions, reflecting the holistic approach of people-centred services. Case managers observe specific dimensions in the client’s daily life and provide relevant information to professionals during their shifts. Subsequently, professionals engage in individual conversations or assessments with the clients. For example, family-related issues observed by case managers may be conveyed to a social worker for assessment. Rehabilitation assessment includes evaluations of independent living functions, social functions, occupational functions, mental and physical health, and assessment of family and social support systems. 
(Chapter 2 [Note]2) Full-time managers act as daily facilitators and serve as interpreters between professional language and common language. Because of the large number of service recipients, case managers do not directly participate but, instead, engage in subsequent interventions based on documents written by professionals. The content of these documents records the client’s status across the five dimensions. Case managers synthesize the professional analysis provided by various professionals, reconcile any contradictions, and design training content based on the overall recommendations and specific goals. This information is then translated into layperson’s terms for effective communication with the clients. Challenges in collaboration Within this community rehabilitation centre, the division of responsibilities between professionals and non-professionals, as well as among different specialities, not only separates job duties but also results in the segmentation of service recipients. The individual is categorized into five dimensions, each receiving different recommendations. However, owing to staggered working hours of staff, information relies on document transmission. The case manager responsible for daily training needs to integrate recommendations from various professionals by reading document data and formulate them into the ISP. However, the translation of text content may lead to misunderstandings owing to cognitive disparities. For instance, in the Rehabilitation Assessment Integration Form and the ISP for service user MM during the fourth quarter of the year 111, the case manager noted MM’s problem-solving ability as insufficient, often refusing attempts with the phrase “I can’t.” After assessment, professionals wrote: The client possesses problem-solving abilities, but because of low self-esteem, appears lacking in confidence and refuses attempts when faced with higher-demand tasks. Professional recommendations included: Staff can use a partnership approach to accompany the client in learning work skills, providing space and flexibility for attempts, and setting goals. The expectation is that the client can frequently discuss feelings and satisfaction after each activity to strengthen self-esteem. (Excerpt from Professional Assessment dated 2022.11.15) However, when the case manager incorporated these suggestions into the ISP, they failed to comprehend the professional assessment results and recommendations. The case manager still emphasized the case’s goal in “ learning self-problem-solving methods .” Several specific plans designed by the case manager primarily focused on requirements for the case ( e.g., requiring the case to research on their own when encountering problems rather than seeking help from staff, earning reward points for successful completion ), without recording the professional expectations of adjustments in service approaches, such as “ partnership relationships ” and “ providing space for attempts .” This outcome highlights three aspects of challenges in the workflow: (1) understanding of document content is influenced by staff’s knowledge; (2) when documents are written by case managers, the writing perspective tends to lean towards instructive, training-oriented language, directing the case on “how” to undergo rehabilitation; (3) the recommendations provided by professionals may be suitable for the relationship between professionals and the case but may not necessarily apply to the relationship between case managers and the case. 
In the observed phenomenon, we recognize that while the ISP is the rehabilitation plan for the individual, the involvement of other staff members plays a significant role in the workflow. Professional judgement and guidance from professionals and case managers are documented. The staff members have various duties, and it is difficult for them to find time and space for group communication. Relying on document transmission of information and the case manager’s integration leads to a lack of substantial “collaboration” among participants to assist the case in rehabilitation training.
The case manager assumes a dual role in document management, serving as both the creator (document writer) and participant (contributor to discussions, provider of client’s daily performance insights, and content reviewer). Documents composed by the case manager, including the Rehabilitation Assessment Integration Form and ISP, constitute pivotal elements in formulating the client’s rehabilitation content. The case manager plays a crucial role in synthesizing information. This capability arises from their prolonged interactions with the client during the rehabilitation period and their expertise as a professional with a foundation in mental health services and case management. Furthermore, the case manager is an indispensable specialist within the community rehabilitation centre’s service framework. Positioned between the client and professionals, the case manager functions as a client advocate, conveying updates and potential issues to professionals while implementing professionally analysed problems and suggestions into the client’s services. Thus, before document creation, the case manager engages in discussions with various professionals, family members, and clients to scrutinize the rationality and feasibility of issues and recommendations. However, given that a case manager may handle up to 15 clients, their direct participation in routine assessments (Step 2) or meetings (Step 6) conducted by professionals each month may not be feasible. In such instances, the case manager needs to continually monitor the dynamics of each case and provide individualized assistance with training. Consequently, for cases undergoing professional assessments, the case manager acquires information by perusing records from professionals, using it as a foundation for adjusting the rehabilitation plan. The client assumes the central role in the ISP, actively participating in each planning step. However, the client’s involvement primarily pertains to the completion of forms related to Step 5. The design of Step 5 documents serves two primary functions: (1) aiding the client in reviewing the rehabilitation process to consciously engage in daily training and enhance effectiveness, and (2) substantiating the client’s engagement in diverse rehabilitation activities to meet the supervisory authority’s requirements for institutional service diversity. Most documents are crafted by the case manager, and individuals hold distinct record forms based on diverse training content. For instance, the diet record form is conceived by the case manager, considering the client’s health check values, physical fitness tests, professional assessments, etc., to facilitate health improvement. However, the content documented by clients may not authentically depict the rehabilitation situation. For example, in the case of service user KK, struggling with poor blood sugar control and financial management, the case manager guided them to use a budget sheet to simultaneously record expenses and dietary choices. Despite the client consistently documenting the same items daily and meeting expectations, overspending on pocket money and abnormal spikes in blood sugar levels were identified. A similar situation was observed with service user ZZ, who completed the water intake record and household chores record based on ideal conditions but often responded with silence when verbally questioned, revealing incomplete tasks. 
These instances highlight that the act of documenting frames the client’s daily life, treating rehabilitation as an “exam,” thereby undermining the “individual” and “comprehensive” aspects of the rehabilitation plan. According to the Regulations on the Establishment and Management of Mental Rehabilitation Institutions, professionals include occupational therapists, psychologists, social workers, and nurses, among others. The regulations mandate specific educational backgrounds, professional experience, and seniority for these professionals. Professionals are anticipated to offer precise assessments and recommendations in community rehabilitation institutions. Hence, professionals serve as experts providing refined advice to the case manager for more effective rehabilitation. They are responsible for documenting assessment results, goals, and recommendations primarily for the case manager’s review. The language used not only analyzes client issues but also provides instructive explanations. Each document features a signature field to represent the involvement of relevant individuals. According to regulations established by the supervisory authority, the client is the main figure in the rehabilitation plan. However, most documents are completed by case managers and professionals. To symbolize the inclusion of the client in the documented material, documents authored by individuals other than the client necessitate the client’s attendance, reading, and signature. The clients become “recorded” individuals.
The formulation process of the ISP is dictated by the regulations set forth by the supervisory authority. It stipulates that the assessment is the responsibility of professionals, with full-time managers providing rehabilitation observations and guiding participants in self-observation and feedback. These become references for revising the rehabilitation service plan and simultaneously encourage participant involvement. (2.1 Rehabilitation Assessment [Note]1) Adhering to regulations, the hiring of professionals is based on a unit of 15 individuals, with hiring calculated in multiples of 15. Taking our centre, which serves 23 individuals, as an example, the centre is required to employ personnel based on a capacity of 30 individuals. According to the Standards for the Establishment of Mental Rehabilitation Institutions, our centre employs part-time occupational therapists for 10 hours per week, social workers for 4 hours per week, and a nurse (who also serves as the head of the centre, contributing 8 hours per week). In addition to the nurse, who doubles as the institutional head, part-time professionals have staggered schedules to avoid scheduling conflicts in counselling spaces. Furthermore, the assessment by professionals is expected to cover five dimensions, reflecting the holistic approach of people-centred services. Case managers observe specific dimensions in the client’s daily life and provide relevant information to professionals during their shifts. Subsequently, professionals engage in individual conversations or assessments with the clients. For example, family-related issues observed by case managers may be conveyed to a social worker for assessment. Rehabilitation assessment includes evaluations of independent living functions, social functions, occupational functions, mental and physical health, and assessment of family and social support systems. (Chapter 2 [Note]2) Full-time managers act as daily facilitators and serve as interpreters between professional language and common language. Because of the large number of service recipients, case managers do not directly participate but, instead, engage in subsequent interventions based on documents written by professionals. The content of these documents records the client’s status across the five dimensions. Case managers synthesize the professional analysis provided by various professionals, reconcile any contradictions, and design training content based on the overall recommendations and specific goals. This information is then translated into layperson’s terms for effective communication with the clients.
Within this community rehabilitation centre, the division of responsibilities between professionals and non-professionals, as well as among different specialities, not only separates job duties but also results in the segmentation of service recipients. The individual is categorized into five dimensions, each receiving different recommendations. However, owing to the staggered working hours of staff, information relies on document transmission. The case manager responsible for daily training needs to integrate recommendations from various professionals by reading the documents and formulating them into the ISP. However, the translation of text content may lead to misunderstandings owing to cognitive disparities. For instance, in the Rehabilitation Assessment Integration Form and the ISP for service user MM during the fourth quarter of the year 111 (2022 in the ROC calendar), the case manager noted MM’s problem-solving ability as insufficient, often refusing attempts with the phrase “I can’t.” After assessment, professionals wrote: “The client possesses problem-solving abilities, but because of low self-esteem, appears lacking in confidence and refuses attempts when faced with higher-demand tasks.” Professional recommendations included: “Staff can use a partnership approach to accompany the client in learning work skills, providing space and flexibility for attempts, and setting goals. The expectation is that the client can frequently discuss feelings and satisfaction after each activity to strengthen self-esteem.” (Excerpt from Professional Assessment dated 2022.11.15) However, when the case manager incorporated these suggestions into the ISP, they failed to comprehend the professional assessment results and recommendations. The case manager still emphasized the case’s goal of “learning self-problem-solving methods.” Several specific plans designed by the case manager primarily focused on requirements for the case (e.g., requiring the case to research on their own when encountering problems rather than seeking help from staff, earning reward points for successful completion), without recording the professional expectations of adjustments in service approaches, such as “partnership relationships” and “providing space for attempts.” This outcome highlights three challenges in the workflow: (1) understanding of document content is influenced by staff’s knowledge; (2) when documents are written by case managers, the writing perspective tends to lean towards instructive, training-oriented language, directing the case on “how” to undergo rehabilitation; and (3) the recommendations provided by professionals may be suitable for the relationship between professionals and the case but may not necessarily apply to the relationship between case managers and the case. In the observed phenomenon, we recognize that while the ISP is the rehabilitation plan for the individual, the involvement of other staff members plays a significant role in the workflow. Professional judgement and guidance from professionals and case managers are documented. The staff members have various duties, and it is difficult for them to find time and space for group communication. Relying on document transmission of information and the case manager’s integration leads to a lack of substantial “collaboration” among participants to assist the case in rehabilitation training.
In the Establishment and Management Measures/Regulations of the Psychiatric Rehabilitation Agency, the standardization of qualifications has a hierarchy based on academic qualifications. Professionals have more power in community rehabilitation centres because of their academic qualifications and experience in specific majors. Even if professionals have expertise only in specific professional fields and part-time professionals have far fewer working hours than full-time managers do, the system is designed to place majors and academic qualifications first. Professionals often cooperate with case managers to conduct specific dimensional assessments. Based on the number of professionals and hours of service, each case can be allocated less than one hour of individual interaction with professionals per month. For more accurate evaluations and analyses, professionals rely heavily on case managers to determine a client’s situation. The evaluation results are transferred to the case manager for implementation in a rehabilitation routine. However:

The time for professionals to come is cut into once a week, and some tasks (like group training) are required. I think there are really few discussions (case managers and professionals) can do unless the organization is willing to increase the working hour of them. At the economic level, it is difficult, which makes this thing not so easy to do. (Chi-Chun, head of the center, 2023.02.17)

Therefore, “professionalism first” results in the centre designing a Rehabilitation Assessment Consolidation Form to distinguish the responsibilities of various professionals and case managers. This form is filled in by the case manager according to the daily observation of the client and then assigned to various professionals to collect and write data from the client during shift time. Even if a case manager is the closest and most familiar staff member to the client, a good relationship does not mean having the right to speak; rather, decisions are made based on whether the person is a “professional.” The rehabilitation service for service users produces a “staff-led” service model owing to the design of work forms in the system, instead of empowering clients.
How did you put those things in and let people know? … These are very routine. I have always made these routines a part of rehabilitation. Rehabilitation is such a routine. (Chi-Chun, head of the center, 2023.02.17) The ideal form of “participation” in rehabilitation and decision-making for people with mental health disabilities, based on the CRPD, should be actively engaging in rehabilitation activities related to themselves. However, constrained by the text of the policy, the service and power relations in the rehabilitation centre are organized, and the participation of people with mental disabilities is textualized. First, all rehabilitation projects must maintain records. For some individuals with mental disabilities, the ticked boxes in the record form are a type of homework that must be submitted; however, this does not mean that every item is achieved. This kind of participation is not only a response to “policy expectations,” but also reveals that people with mental illness do not understand, agree with, or dare question the arrangement of rehabilitation content. Second, the signature acts as proof. Most documents are designed or written by staff, but many documents lack signatures. From an evaluation perspective, signatures represent participation. Third, all related records are divided into categories according to the evaluation criteria, instead of being integrated according to the needs of service users. The service is arranged according to the needs of the service provider, and the participation of the service user is built into a “viewable” form to obtain the necessary resources for the organization’s operation. This research shows that, even though the CRPD and local policy support the ability of people with mental health difficulties to express their preferences and ideas, these processes are not easily recorded and are difficult to observe. Participation in community rehabilitation can only be textualized into superficial, easily inspectable forms with signatures and ticks. Finally, the service is stylized and lacks engagement.
Although the work forms are designed according to the evaluation indicators, there is room for interpretation and creation. Chi-Chun, the head of the centre, said: At least like this, you can develop the rest independently… Otherwise, all community rehabilitation centers in Taiwan will look the same, but everyone seems to look a little different. (Chi-Chun, head of the center, 2023.02.17) When Chi-Chun supervises her colleagues, staff members have different professional training and knowledge. It is not easy for case managers to understand professionals’ suggestions or clients’ needs. However, this should not weaken anyone’s role. In the past, after explaining the situation to a professional, the case manager would take care of other cases. However, now, the person in charge would propose a meeting of the case manager, professional, and client. (Chi-Chun, 2023.02.17, head of the center) . After the change in March 2023, the researcher, a part-time occupational therapist, felt that the dialogue was smooth. This method reduces the distance between participants and empowers each participant. The case manager joined the meeting. I felt that there was nothing to hide, just like the “open dialogue” I had learned before. In the space, the client can hear our considerations and then express his feelings. Also, I can hear the relationship and life stories in the interaction of clients and case managers. I can write down clearer suggestions, or even just record the resolution we are discussing at this moment. (2023.03.14 field notes)
Institutional text and power

Writing is widely used in daily life rehabilitation, including problem recording and writing plans. In addition, writing is the basis for the competent authority to judge the service quality of the institution. Writing and power in mental rehabilitation centres can be viewed on two levels: writing between services and writing under supervision. These two levels influence each other and are difficult to separate; at the same time, they also create a preset power relationship.

Writing between services can function as a record of rehabilitation progress. Participating writers included clients, case managers, and professionals who completed the same or different documents. In the service relationship, according to the main writer of the document, the document can be divided into three categories: (1) written by the clients, the check and record form related to the implementation of rehabilitation; (2) written by the case manager, leading the plan form of the rehabilitation content; and (3) written by professionals, the results of the assessment guiding the case manager. From these document types, we observe a hierarchical relationship between knowledge and power. As Smith ( ) elaborated in her work on IE, texts are central to how institutional power is enacted and maintained. The staff holds the writing authority for most documents, and this reflects a broader system in which professionals dictate the terms of service delivery, reinforcing their power over both case managers and clients. Professionals’ documents can affect the content of the documents of the case manager and the client; the case manager belongs to the next level of knowledge and writing power, and can supervise or deploy the client’s work. The forms filled in by clients are the most basic source of information, and present the rehabilitation content in words as proof of implementation. The relationship between these three parties is assembled into a professional governance/management model through writing; professionals set the general direction of rehabilitation training, case managers formulate and initiate rehabilitation training, and clients implement and record the rehabilitation situation. As Huang ( ) discussed, users of rehabilitation services are fixed in this arrangement by writing records, which facilitates management, and the progress of rehabilitation can continue under staff control.

Writing under supervision is mainly influenced by an external evaluation mechanism. The evaluation benchmark regulates how the service should be performed, and how to perform it “with quality.” Drawing on Campbell and Gregor ( ), the evaluation process shapes not only the content of the services but also the interactions between the staff and service users. The textualized process becomes a form of power that governs how rehabilitation is structured. The requirements of the assessment criteria permeate all daily documents of the rehabilitation centre and manage the relationship between staff and service users, service purposes, and matters that should be completed through the documents. In addition, regular assessment mechanisms re-examine the quality of operation of the rehabilitation institution. Through a review of these documents and on-site visits, the service content provided by the management institution and the suitability of the service supply are determined.
The assessment results affect the operating funds of the rehabilitation institution, leading the staff to design documents according to the requirements. The person most affected is the case manager. Case managers are full-time workers in organizations responsible for “managing” client services. In addition to directly assisting clients, they must connect with professionals, coordinate community resources, and maintain normal operations to meet various service requirements in the evaluation norms. As the competent authority needs to “read the document,” it creates and enters the power relationship connected by text. The competent authority manages the organization through a regular review of documents. A textualized operation at the institution was developed for this supervision. Under this mechanism, non-recorded content means non-existence, and services that cannot be classified into the evaluation benchmark cannot be provided.

Professional governance and double weakness

The ideology of community rehabilitation not only affects the arrangement of service content but also the relationship between participants. Professionals are entrusted with most important evaluations and writing responsibilities during extremely short working hours. This implies the system’s affirmation of professionals. As Foucault ( ) discussed in his work on power/knowledge, professionals use their specialized knowledge to maintain control and authority, creating a power dynamic in which those with expertise, such as psychiatrists, are given greater influence over service delivery. However, even if a full-time case manager spends a long time with the client, they cannot assess the client’s condition because they might not be psychiatry professionals. Therefore, service planning must be conducted by professionals. However, based on the experience of professionals, every evaluation might not involve a full dialogue or mutual understanding. Although professionals can provide rehabilitation suggestions, they cannot assist in rehabilitation implementation. That needs to be entrusted to a case manager, or the suggestions are not completely feasible. Case managers play a more important role in community rehabilitation, merging the experiences of clients and professionals, and implementing assisted rehabilitation more smoothly. When clients are unable to actively express themselves, the case manager can pay attention to their conditions and provide appropriate assistance. Professionals cannot replace case managers. However, from a systematic perspective, case managers do not belong to a psychiatric profession. Abbott ( ) highlighted the hierarchical nature of professional divisions, in which certain professions hold more authority because of their specialized knowledge. In this case, psychiatric professionals hold the ultimate decision-making power, while case managers, who play a crucial intermediary role, are marginalized. The case manager does not directly provide services according to the needs of the client but acts as an intermediary to provide services based on professional advice. By contrast, if clients speak for themselves, case managers will seek professional advice and do not make rash corrections. In such relationships, the case manager lacks real power. Although the recovery model has been employed at community rehabilitation centres, it coexists with old professional governance, and in practice, a hybrid model has developed that encourages clients’ participation but returns to professional judgement.
Bourdieu ( ) argued that professional language is a form of symbolic power that maintains the authority of the professionals over clients and intermediaries, like case managers, by using terms and concepts that reinforce their expertise. Managers are caught in the middle and work under the guidance of other managers. From the perspective of Foucault’s “bio-power,” psychiatric medicine has developed into a persuasive and intuitive mode of dealing with mental distress (Perron et al., ). Many assessments go back to medical specialities. Subsequently, by writing documents, the person was tagged to facilitate follow-up tracking and management. In response to the requirements of higher authority for documents, professionals uphold their professional knowledge to disassemble and convert user questions into professional terms. The case manager must confirm that the writing of the relevant services is coherent and make the clients appear suitable for receiving services to pass the examination and obtain operating funds. Thus, clients are fragmented into professional terms under an institutional system that emphasizes writing and scoring. Case managers serve as administrative intermediaries, and professionals with more professional knowledge than case managers hold solid positions. Professional governance will face the above-mentioned contradictions in community mental rehabilitation services, resulting in the double disadvantage of case managers and clients. Gewirtz ( ) highlighted this double weakness in her analysis of managerial systems, where middle-level workers, like case managers, are squeezed between the demands of higher authorities and the needs of clients, resulting in a lack of real agency for both groups. Community rehabilitation services look forward to analysing various professional dimensions to produce the rehabilitation plans that clients need. However, analyses by many professionals have disassembled the entire appearance of service users into fragments. It is impossible to practice client participation in the decision-making process to complete professional services and support documents. Finally, most of the client rehabilitation content was arranged by the staff.

However, although professionals have high power status in community rehabilitation and greater influence on services, their work content and beliefs might not be as they wish. With limited part-time hours, professionals do not have the time to participate in day-to-day rehabilitation activities or get to know clients. In addition, professional suggestions must be read, translated, and implemented by the case managers. Thus, professional interventions are both important and remote. As Foucault ( ) discussed, professionals maintain their influence through indirect control, such as documentation and decision-making, but remain distant from the daily implementation of services, leaving case managers with the burdens of execution without the power to make decisions. In community rehabilitation centres, professionals are like supervisors who stay out of the way. Although their profession was valued by other participants, they did not seem to be members of community rehabilitation centres. They only needed to complete the writing work on time and in accordance with regulations; they were not involved in the return to social life.

Decision-making participation in community rehabilitation centres

Participation in decision-making is the focus of this study.
In community rehabilitation centres, “decision-making” refers to various decisions related to the rehabilitation and life of service users; while “participation” refers to various roles in the field. As both an insider (an occupational therapist) and an outsider (a researcher), I occupy a dual role that allows me to observe these interactions from multiple perspectives. Reflecting on this dual positionality, I recognize that my professional role might have shaped my interactions with both service users and case managers, influencing how I interpreted power dynamics within the institution. As an insider, I am aware of the limitations imposed on case managers and clients, but as an outsider, I can critically examine how these limitations are reinforced by institutional structures. This dual perspective helps me better understand the nuances of decision-making participation and the hierarchical relationships at play in community rehabilitation centres. According to research findings, professionals in community rehabilitation centres are paramount, have great decision-making power, and are indispensable under the regulations of the Department of Mental Health and Department of Health Insurance. The case manager and individuals with mental disabilities interact in the community rehabilitation centre. However, even though case managers have the closest relationship with service users, because of their low professionalism, they have a weak voice in decision-making. Meanwhile, the service user is the party who follows instructions, and is the source of the information provided through document writing. The recovery model emphasizes client autonomy; however, promoting it in community rehabilitation centres is difficult. In Taiwan, recovery is not initiated or advocated by those experiencing mental illness, as in the West. Thus, the concept of recovery is relatively foreign to service users and providers. Ethically, it is important to acknowledge the vulnerability of service users in this context. As a researcher, I am aware of the power imbalance between professionals and service users, and I have made efforts to ensure that participants’ voices were represented fairly and respectfully. In conducting interviews and observations, I was mindful of the ethical challenges in balancing their need for protection with their right to autonomy and participation. This ethical reflection is critical when working with a population that has historically been marginalized and subjected to institutional control. However, because of differences in knowledge and power, service users and case managers have little power to participate in decision-making, even if they implement rehabilitation content every day. Services are deeply influenced by institutions, echoing other research results from Taiwan (Chen, ; Kuo, ). Because of the system design, this type of working mode and interactive relationships has been developed: professionals, case managers, and clients are shown from top to bottom ( ). In addition to the vulnerable position of the client, the case manager plays another vulnerable role. The evaluation benchmark stipulates that full-time managers need only graduate from high school and receive specific education and training. In this institution, the case manager is not only a translator of professional knowledge and common people’s language, but also has the only role in integrating resources and managing service processes. Even, the case manager is the executor who really assists in rehabilitation. 
Indeed, the case manager is the executor who actually assists in rehabilitation. Case managers must perform many tasks; however, because they do not necessarily have professional knowledge, they remain subordinate to professionals. Thus, the importance of case managers has not been recognized.
Through the method of IE, this study finds the following. (1) Although the concepts of recovery and human rights are localized through academics and policies, the unchanging system of division of powers and responsibilities leaves no room for discussion on the subjectivity of people with disabilities. It continues to use the attitude of the medical model to organize services and establish the power status of professionals. Such a design leads to the division of multiple disciplines in practical work, and cooperation is difficult because of staggered schedules and separate work spaces. (2) Case managers and service users are doubly disadvantaged in community rehabilitation centres. Case managers have no right to speak and need to work under professionals; people with mental health disabilities are fragmented, and their autonomy is lost in professional language. (3) Community rehabilitation centres rely on health insurance payments as operating funds. To maintain the operation of the institution, various compliance assessment tasks have emerged, constraining the time available for, and the methods of, implementing individualized care services. (4) The community rehabilitation model might not positively promote service users’ return to their community. Instead, they may stay in institutions under professional governance and undergo repeated training until they meet society’s expectations. Taiwan’s mental healthcare system, which has been underdeveloped since the 1980s, requires further improvement to ensure the human rights of people with disabilities and their social participation.

This study holds significant implications for both practice and research. On the practical side, it highlights the need for a more user-centred decision-making model in mental health services. Service providers, including community rehabilitation centres, could benefit from adopting strategies that empower people with mental health disabilities to participate more actively in their own care decisions, thereby enhancing their autonomy. Furthermore, the findings point to the necessity of reforming the interdisciplinary collaboration within rehabilitation centres to ensure that case managers and other non-professional staff are not marginalized. Such reforms could lead to more cohesive teamwork and improved service quality. In terms of research, this study demonstrates the potential of IE as a method to explore hidden power dynamics and structural inequalities within healthcare systems. Future studies could apply this method in other healthcare contexts to investigate how institutional practices shape user experiences. Additionally, this research provides a foundation for policy development aimed at enhancing mental health services. Policymakers can use these findings to address the limitations of the current healthcare system, such as over-reliance on health insurance payments and rigid compliance assessments, which detract from individualized care. Finally, this research opens avenues for examining how mental health service models can better align with supportive decision-making frameworks, which respect the rights and agency of people with disabilities, in line with international human rights standards, such as the CRPD.
This study has two main limitations. First, it lacks an in-depth discussion of the role of family members in decision-making processes. Because of the researcher’s role and the structure of institutional services, establishing contact with family members proved challenging, making it difficult to fully understand their influence on the service users. Family members significantly impact the experiences of many service users, yet the researcher, as a part-time occupational therapist, had limited opportunities for interaction with them. Additionally, as the researcher is not a social worker, there was less access to information regarding the service users’ family relationships and the level of family involvement, which are often shared during case manager handovers. This made it difficult to identify appropriate entry points for exploring the role of family members. Second, this study was conducted at a single institution in northern Taiwan, which limits the generalizability of the findings to other day-based psychiatric rehabilitation facilities across the country. While community rehabilitation services in Taiwan follow the same policies and standards, the implementation varies owing to such factors as participants’ knowledge, regional culture, and resource availability. For example, areas with well-developed public transportation and a service-oriented economy require different life skills and vocational training compared to regions where motorcycles are the primary mode of transportation and agriculture or industry predominates. Additionally, in densely populated areas where community ties are weaker and the NIMBY (Not In My Backyard) effect is more pronounced, the establishment and operation of community rehabilitation services must consider social and environmental factors. By contrast, in areas with lower resource availability, rehabilitation staff may need to rely more on local networks to connect service users with informal community resources. Future research should address the following areas: (1) the role and power dynamics of caregivers within institutional contexts; (2) the tensions and compromises between client participation in decision-making and adherence to regulations across different day-based psychiatric rehabilitation facilities; and (3) a comparison of the service facilities managed by different authorities or in various regions to understand how they respond to the agency and rights of service users. Such research could offer a more diverse understanding of community rehabilitation practices and foster dialogue between practice and policy.
Diagnostic value of fetal autopsy after early termination of pregnancy for fetal anomalies | db7b48d9-b7b8-4bf2-9391-ee035aead272 | 9581376 | Forensic Medicine[mh] | According to French law, a fetal anomaly can lead to a termination of pregnancy (TOP) if the couple so requests, after an opinion by a prenatal diagnostic center that there is "a strong probability that the child to be born is affected by a particularly severe condition recognized as incurable at the time of diagnosis." Two techniques can be used for early TOP before 16 weeks: either surgical vacuum aspiration, or medically induced vaginal expulsion. Surgical aspiration has the advantage of being a rapid procedure, most often performed́ on an outpatient basis. Its disadvantage is that it does not preserve the whole fetus; the resulting fragmentation makes a fetal autopsy difficult and potentially of little value. Moreover, like all surgery, it involves risks, especially that of uterine perforation. Medical induction of vaginal expulsion has the advantage of preserving the bodily integrity of the fetus, but the procedure is longer and may be more psychologically traumatic for the woman . It also involves the risk of trophoblastic retention and thus of a secondary aspiration. For these reasons, surgical aspiration is often preferred when cytogenetic examinations before TOP have established the etiologic diagnosis, so that a fetal autopsy for this purpose is unnecessary. This is the case for aneuploidies and for chromosomal imbalances identified by array-based comparative genomic hybridization (aCGH). When no etiologic diagnosis is available before the TOP, medically induced vaginal expulsion is systematically proposed to enable the performance of a fetal autopsy. The mother or the couple receives medical information about the advantages and disadvantages of each method and makes the final decision about the procedure and about the autopsy. Requests for autopsies have diminished over the past several years , associated with cultural factors or religious beliefs, but also because of the simplicity of the vacuum aspiration procedure. Ultrasound does not have a 100% sensitivity rate for screening early anomalies , nor is its specificity for diagnosis 100%. Most authors and teams therefore advise a systematic fetal autopsy after TOP to identify additional abnormalities, guide possible subsequent genetic exploration, and refine genetic counseling for future pregnancies. The diagnostic value of fetal autopsies after 18 weeks has been studied́ extensively [ , – ]; much less is known about the value of earlier autopsies . In 2011, we examined early TOPFA before 16 weeks in our department and showed that genetic counseling benefited from TOP by medically induced vaginal expulsion. In view of the technological advances in ultrasound and in fetopathology, we decided to continue this research. We therefore examined the value of fetal autopsy in case of early TOPFA before 16 weeks in cases without any fetal cytogenetic abnormality. The principal objective of the study was to assess the diagnostic value of an autopsy over prenatal ultrasound and its impact on genetic counselling, overall and by method of termination: surgical aspiration or medically induced vaginal expulsion. The secondary objective was to compare the complication rate by method of termination.
Design and case selection

This retrospective observational study took place at the Port Royal Maternity Hospital, which has both a prenatal diagnosis department and a multidisciplinary prenatal diagnostic center. The study reviewed records for the calendar years 2013 through 2017. To assess the diagnostic utility of fetal autopsies, we included women with a fetal anomaly diagnosed at the first-trimester ultrasound that resulted in a TOPFA before 16 weeks of gestation. We selected all TOPFA performed from 11 weeks to 16 weeks because this allowed us to include the TOP performed for anomalies screened during the first-trimester ultrasound, while leaving time for women to think their decision through thoroughly and to schedule and complete the TOPFA. Exclusion criteria were TOP for a maternal indication or for genetic or chromosomal abnormalities (e.g., aneuploidy and pathogenic imbalances identified by karyotype or array-based comparative genomic hybridization (aCGH)) diagnosed before TOP. For the analysis of the main outcome, we further excluded cases where no autopsy was performed. All women underwent a first-trimester ultrasound examination between 11 weeks and 13 weeks + 6 days. The pregnancy was dated by measuring the crown-rump length at this examination. After the identification of an anomaly during this ultrasound, a second ultrasound scan was routinely performed by an expert ultrasonographer to confirm the anomaly and search for additional associated abnormalities. Depending on the abnormalities found, a trophoblast biopsy for karyotype analysis by direct examination and aCGH (with a resolution of 1 Mb) was proposed. The time to obtain the results was 3 days for the karyotype and about 15 days for the aCGH. No fluorescence in situ hybridization (FISH) was performed. After the request for TOPFA, in the absence of an etiological chromosomal or genetic diagnosis, a medically induced vaginal expulsion followed by a fetal autopsy was recommended, but the final choice of the method of termination belonged to the woman. An autopsy of the products of conception was nonetheless routinely suggested for women choosing vacuum aspiration for TOPFA. For women who did not want a fetal autopsy, an external examination of the fetus and radiography could be performed.

Protocol for methods of termination

Whatever the method, given the gestational age, fetal demise was not induced before the procedure. The protocol used for each procedure was identical between 11 and 16 weeks of gestation (WG).

Method for termination of pregnancy by aspiration

Women who chose aspiration for early TOP (before 16 weeks) received 200 mg of oral mifepristone two days before the procedure. The aspiration was performed on an outpatient basis. On the morning of the procedure, the woman received 400 μg of misoprostol vaginally, 2 hours before the aspiration. The procedure took place under general anesthesia in the operating room. Mechanical cervical dilatation was performed with a Hegar dilator set; the aspiration was then performed with a rigid cannula and, if necessary, Winter forceps. Dilatation and aspiration were performed under ultrasound control, with systematic verification that the uterus was empty at the end of the procedure. Most of these fetuses were fragmented.

Method for termination of pregnancy by induction of labor

Women who chose medically induced vaginal expulsion also received 200 mg of mifepristone 2 days before the procedure. They were hospitalized on the morning of the induction.
In the delivery room, they received misoprostol, administered vaginally, 400 μg every three hours, under epidural anesthesia. After the expulsion, the medical team verified the placental delivery and systematically performed a manual uterine examination, routinely followed by ultrasound confirmation that the uterus was empty. A secondary aspiration was performed if indicated. Most of these fetuses were intact.

Protocol for the fetal autopsy

After the mother or couple consented in writing, the autopsy was performed according to a standardized protocol. The products of conception or the fetus were transported in a fresh state to the laboratory in the hours after the procedure.

Pathologic sampling of specimens obtained by aspiration

For TOP by aspiration, the products of conception were washed, fixed in formalin, and analyzed. The macroscopic examination covered the various fetal and extraembryonic structures (fetal organs, placenta, and umbilical cord). The identifiable items underwent a morphologic examination with photographs and radiography. A histologic analysis was then performed on all the elements collected.

Pathologic sampling of specimens obtained by induction of labor

A fetus expelled after medical induction was first weighed and measured, then examined externally, photographed, and radiographed. The organs were then separated and examined whole. Two final macroscopic examinations followed: one internal, of the viscera, to search for malformations, and one neuropathological. Then all the organs were removed and analyzed histologically. Finally, the placenta, membranes, and umbilical cord were analyzed macroscopically and then histologically. All fetal autopsies throughout the study period were performed by one of only two specialists.

Variables collected and variables of interest

We collected the women’s characteristics; the ultrasound, autopsy, and cytogenetic data; and the contents of the follow-up visits, including the delivery of the post-TOPFA results, especially the genetic consultations. The data about the TOPFA, its indication, gestational age at performance, method of termination, any complications, and length of hospital stay were also collected. The indications for TOPFA were grouped into several categories: cerebral abnormalities (comprising all head malformations, including exencephaly); bone abnormalities; hygromas; isolated increased nuchal translucency; abdominal wall defects; neural tube defects; cardiac, lumbosacral, and other abnormalities; and multiple malformation syndromes. As required by French law and regulations, this study was approved by the national data protection authority (Commission Nationale de l’Informatique et des Libertés, CNIL n° 1755849) and by the appropriate ethics committees, i.e., the advisory committee on the treatment of personal health data for research purposes (CCTIRS: Comité Consultatif sur le Traitement de l’Information en matière de Recherche, approval granted November 18, 2010; reference number 10.626). Women were informed that their records could be used for the evaluation of medical practices and were provided the option to opt out of these studies. All data were anonymized before the analysis.

Endpoints

The principal endpoint was the diagnostic value of the fetal autopsy compared with the ultrasound, defined by the demonstration of at least one supplementary abnormality (major or minor) not detected by ultrasound. This analysis was conducted for all cases that had autopsies and by method of termination.
We also sought to assess whether the autopsy results modified the genetic counseling: we considered that they did so if the additional abnormalities that the autopsy identified or could not exclude guided the diagnosis toward a specific etiology and provided specific information for the follow-up of subsequent pregnancies. This information could concern the risk of recurrence or indicate a potential diagnosis to be tested in subsequent pregnancies by an invasive sample or a specific ultrasound follow-up. At the time of the study, whole-genome analysis was not performed systematically; only targeted genetic testing was carried out according to the anomalies found on ultrasound and at autopsy. When the fetal autopsy concluded in favor of an anomaly of sporadic onset without any indication for a specific follow-up for subsequent pregnancies, only ultrasounds for reassurance were recommended. We chose not to categorize these situations as modifications of genetic counseling as such, especially as they could have been implemented in some cases based only on the conclusions of the ultrasound. To limit the potential confounding bias linked to the gestational age (14–16 weeks) at which the TOPFA was performed, we carried out a sensitivity analysis by evaluating the same endpoints in a population including only pregnancies terminated at ≤14 weeks. Secondary analyses for these same outcome measures were conducted in the subgroup with cerebral abnormalities. We also compared the complication rates according to the method of termination: the performance of a secondary aspiration, maternal hemorrhage (blood loss greater than 500 ml), intrauterine retention (anteroposterior diameter greater than 15 mm), or infection (requiring antibiotic treatment for suspected postpartum endometritis). We also compared length of stay by method of termination.

Statistical analysis

We first described our population by means (and standard deviations, SDs) for the continuous variables and by percentages for the categorical variables. The categorical variables of interest were then compared by chi-square or Fisher’s exact tests. Student’s t test was used to analyze the quantitative variables. All statistical analyses were performed with Stata 16 software. Differences were considered significant when p < 0.05.
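As an illustration of the type of comparisons described above, the following minimal sketch shows how a complication rate and the length of stay might be compared between the two methods of termination. It is an assumption-laden example only: the counts and lengths of stay are invented placeholders rather than study data, and it uses Python with scipy instead of the Stata 16 software actually used for the analyses.

```python
# Illustrative sketch only: hypothetical counts, not study data.
# Mirrors the comparisons described above (chi-square or Fisher's exact
# test for categorical outcomes, Student's t test for continuous ones).
import numpy as np
from scipy import stats

# 2x2 table: rows = method of termination (aspiration, induction),
# columns = complication (yes, no). Placeholder values.
table = np.array([[3, 47],
                  [6, 44]])

chi2, p_chi2, dof, expected = stats.chi2_contingency(table)

# Fisher's exact test is preferred when expected cell counts are small.
if (expected < 5).any():
    odds_ratio, p_value = stats.fisher_exact(table)
    print(f"Fisher's exact test: p = {p_value:.3f}")
else:
    print(f"Chi-square test: p = {p_chi2:.3f} (dof = {dof})")

# Student's t test for a continuous outcome, e.g. length of stay (days).
los_aspiration = np.array([0.5, 0.5, 1.0, 0.5, 1.0])  # placeholder values
los_induction = np.array([1.0, 2.0, 1.5, 2.0, 1.0])   # placeholder values
t_stat, p_t = stats.ttest_ind(los_aspiration, los_induction)
print(f"Student's t test: t = {t_stat:.2f}, p = {p_t:.3f}")
```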
This retrospective observational study took place at the Port Royal Maternity Hospital, which has both a prenatal diagnosis department and a multidisciplinary prenatal diagnostic center. The study reviewed records for the calendar years 2013 through 2017. To assess the diagnostic utility of fetal autopsies, we included women with a fetal anomaly diagnosed at the first-trimester ultrasound that resulted in a TOPFA before 16 weeks of gestation. We selected all TOPFA performed from 11 weeks to 16 weeks because this allowed us to include the TOP performed for anomalies screened during the first-trimester ultrasound, while leaving time for women to think their decision through thoroughly and to schedule and complete the TOPFA. Exclusion criteria were TOP for a maternal indication or for genetic or chromosomal abnormalities (e.g., aneuploidy and pathogenic imbalances identified by karyotype or array-based comparative genomic hybridization (aCGH)) diagnosed before TOP. For the analysis of the main outcome, we further excluded cases in which no autopsy was performed. All women underwent a first-trimester ultrasound examination between 11 weeks and 13 weeks + 6 days. The pregnancy was dated by measuring the crown-rump length at this examination. After the identification of an anomaly during this ultrasound, a second ultrasound scan was routinely performed by an expert ultrasonographer to confirm the anomaly and search for additional associated abnormalities. Depending on the abnormalities found, a trophoblast biopsy for karyotype analysis by direct examination and aCGH (with a resolution of 1 Mb) was proposed. The time to obtain the results was 3 days for the karyotype and about 15 days for the aCGH. Fluorescence in situ hybridization (FISH) was not performed. After the request for TOPFA, in the absence of an etiological chromosomal or genetic diagnosis, a medically induced vaginal expulsion followed by a fetal autopsy was recommended, but the final choice of the method of termination belonged to the woman. An autopsy of the products of conception was nonetheless routinely suggested for the women choosing vacuum aspiration for TOPFA. For women who did not want a fetal autopsy, external examination of the fetus and radiography could be performed.
Whatever the method, given the gestational age, fetal demise was not induced prior to the procedure. The protocol used for each procedure was identical between 11 weeks of gestation (WG) and 16 WG.

Method for termination of pregnancy by aspiration

Women who chose aspiration for early TOP (before 16 weeks) received 200 mg of oral mifepristone two days before the procedure. The aspiration was performed on an outpatient basis. On the morning of the procedure, the woman received 400 μg of misoprostol vaginally, 2 hours before the aspiration. The procedure took place under general anesthesia in the operating room. Mechanical cervical dilatation was performed with a Hegar dilator set; the aspiration was then performed with a rigid cannula and, if necessary, Winter forceps. Dilatation and aspiration were performed under ultrasound control, with systematic verification that the uterus was empty at the end of the procedure. Most of these fetuses were fragmented.

Method for termination of pregnancy by induction of labor

Women who chose medically induced vaginal expulsion also received 200 mg of mifepristone 2 days before the procedure. They were hospitalized on the morning of the induction. In the delivery room they received misoprostol, administered vaginally, 400 μg every three hours, under epidural anesthesia. After the expulsion, the medical team verified the placental delivery and systematically performed a manual uterine examination, routinely followed by ultrasound confirmation that the uterus was empty. A secondary aspiration was performed if indicated. Most of these fetuses were intact.
Protocol for the fetal autopsy

After the mother or couple consented in writing, the autopsy was performed according to a standardized protocol. The products of conception or the fetus were transported in a fresh state to the laboratory in the hours after the procedure.

Pathologic sampling of specimens obtained by aspiration

For TOP by aspiration, the products of conception were washed, fixed in formalin, and analyzed. The macroscopic examination covered the various fetal and extraembryonic structures (fetal organs, placenta, and umbilical cord). The identifiable items underwent a morphologic examination with photographs and radiography. A histologic analysis was then performed on all the elements collected.

Pathologic sampling of specimens obtained by induction of labor

A fetus expelled after medical induction was first weighed and measured, then examined externally, photographed, and radiographed. The organs were then separated and examined whole. Two further macroscopic examinations followed: one internal, of the viscera, to search for malformations, and one neuropathological. All the organs were then removed and analyzed histologically. Finally, the placenta, membranes, and umbilical cord were analyzed macroscopically and then histologically. All fetal autopsies throughout the study period were performed by one of only two specialists.
Variables collected and variables of interest

We collected the women’s characteristics; the ultrasound, autopsy, and cytogenetic data; and the contents of the follow-up visits, including the delivery of the post-TOPFA results, especially the genetic consultations. The data about the TOPFA, its indication, gestational age at performance, method of termination, any complications, and length of hospital stay were also collected. The indications for TOPFA were grouped into several categories: cerebral abnormalities (comprising all head malformations, including exencephaly), bone abnormalities, hygromas, isolated increased nuchal translucency, abdominal wall defects, neural tube defects, cardiac, lumbosacral, and other abnormalities, and multiple malformation syndromes. As required by French law and regulations, this study was approved by the national data protection authority (Commission Nationale de l’Informatique et des Libertés, CNIL n° 1755849) and by the appropriate ethics committees, i.e. the advisory committee on the treatment of personal health data for research purposes (CCTIRS: Comité Consultatif sur le Traitement de l’Information en matière de Recherche, approval granted November 18, 2010; reference number 10.626). Women were informed that their records could be used for the evaluation of medical practices and were provided the option to opt out of such studies. All data were anonymized before the analysis.

Endpoints

The principal endpoint was the diagnostic value of the fetal autopsy compared with the ultrasound, defined by the demonstration of at least one supplementary abnormality (major or minor) not detected by ultrasound. This analysis was conducted for all cases that had autopsies and by method of termination. We also sought to assess whether the autopsy results modified the genetic counseling: we considered that it did so if the additional abnormalities that the autopsy identified or did not exclude guided the diagnosis toward a specific etiology and provided specific information for the follow-up of subsequent pregnancies. This information could concern the risk of recurrence or indicate a potential diagnosis to be tested in subsequent pregnancies by an invasive sample or a specific ultrasound follow-up. At the time of the study, whole-genome analysis was not performed systematically; only targeted genetic testing was carried out, guided by the anomalies found on ultrasound and autopsy. When the fetal autopsy concluded in favor of an anomaly of sporadic onset without any indication for a specific follow-up for subsequent pregnancies, only ultrasounds for reassurance were recommended. We chose not to categorize these situations as modifications of genetic counseling as such, especially as they could have been implemented in some cases based only on the conclusions of the ultrasound. To limit the potential confounding bias linked to the gestational age (14–16 weeks) at which the TOPFA is performed, we carried out a sensitivity analysis evaluating the same endpoints in a population including only pregnancies terminated at ≤ 14 weeks. Secondary analyses for these same outcome measures were conducted in the subgroup with cerebral abnormalities. We also compared the complication rates according to the method of termination: the performance of a secondary aspiration, maternal hemorrhage (blood loss greater than 500 ml), intrauterine retention (with an anteroposterior diameter of more than 15 mm), or infection (requiring antibiotic treatment for suspected postpartum endometritis). We also compared length of stay by method of termination.
Statistical analysis

We first described our population with means (and standard deviations, SDs) for the continuous variables and percentages for the categorical variables. The categorical variables of interest were then compared with either chi-square or Fisher’s exact tests. Student’s t test was used to analyze the quantitative variables. All statistical analyses were performed with Stata 16 software. Differences were considered significant when p < 0.05.
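For readers who want to see what the two-group comparisons described above look like in practice, the following is a minimal illustrative sketch in Python; the authors used Stata 16, so this is not their code. It applies a chi-square test and Fisher’s exact test to a 2 × 2 table built, for illustration, from the principal-endpoint counts reported in the Results (the autopsy added information in 31/40 medical inductions versus 6/28 aspirations).

```python
# Illustrative re-analysis sketch (not the authors' Stata code).
# 2 x 2 table: rows = method of termination, columns = autopsy added information (yes / no).
from scipy.stats import chi2_contingency, fisher_exact

table = [[31, 9],   # medical induction: 31/40 autopsies added information
         [6, 22]]   # aspiration:         6/28 autopsies added information

chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)

print(f"chi-square p = {p_chi2:.3g}, Fisher exact p = {p_fisher:.3g}")
# Both p-values fall well below 0.001, consistent with the reported p < .001.
```

Fisher’s exact test is the safer default for tables like this one, where some subgroup cell counts are small.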
Study population and analysis

During the 5-year period of 2013–2017, 318 TOP took place before 16 weeks of gestation at our prenatal diagnosis center. We excluded 239 women: 190 with TOP for a cytogenetic abnormality (102 trisomy 21, 42 trisomy 18, 17 trisomy 13, 14 monosomy X (Turner syndrome), 7 triploidy, 9 other aCGH abnormalities), 20 with TOP for genetic diseases, and 29 for maternal disease (e.g., cancer, exposure to teratogenic treatments (without fetal anomalies in our study), preterm premature rupture of membranes). We therefore included 79 women: 53.2% (N = 42) had TOPFA by medical induction, and 46.8% (N = 37) had TOPFA by aspiration. Among the 42 women who underwent medical induction, two wanted only an external examination without an autopsy, and among the 37 women with aspiration, 9 did not consent to a fetal autopsy ( ).

Description of the population and their terminations of pregnancy for fetal anomalies

Their mean age was 32.3 years (+/- 5 years), and 48% (38/79) were nulliparous. Maternal characteristics did not differ significantly between the medical induction and aspiration groups ( ). The most frequent indication for TOPFA was a cerebral abnormality (34.2%), followed by multiple congenital malformations (27.8%). The mean gestational age at TOPFA was 14 weeks (+/- 1.1). Medical inductions took place later than aspirations: 14.6 weeks versus 13.3 weeks ( p < .001). Overall, 53.2% (42/79) of terminations were performed before or at 14 weeks: 28.6% (12/42) of the medical inductions and 81.1% (30/37) of the aspirations. A trophoblast biopsy was performed in 41.8% (33/79) of cases: 61.9% (26/42) of the medical inductions and 18.9% (7/37) of the aspirations.

Value of fetal autopsy ( )

Principal endpoint

Among the fetal autopsies (N = 68), 54.4% (37/68) provided more information than ultrasound; this was the case more frequently in the medical inductions (77.5%, 31/40) than in the aspirations (21.4%, 6/28) ( p < .001) ( ). For example, the autopsy enabled the identification of signs associated with what had appeared to be an isolated hygroma on ultrasound (n = 4) and the diagnosis of amniotic band syndrome with multiple malformation syndromes (n = 3) and with a short umbilical cord (n = 1). It also made it possible to diagnose Cantrell’s pentalogy in two cases, although the medium intermediate celosomia had appeared isolated on ultrasound. The fetal autopsy led to a change in the genetic counseling provided in 20.6% of cases (14/68), again more often in the medical induction group (32.5%, 13/40) than in the aspiration group (3.6%, 1/28) ( p < .001) ( ). The cases for which fetal autopsies modified the genetic counseling are detailed in .

Sensitivity analysis: TOPFA ≤ 14 weeks

When restricting to TOPFA performed before or at 14 weeks, 42 pregnancies were included. Among the fetal autopsies (N = 35), 42.9% (15/35) provided more information than ultrasound; this was again more frequent in the medical inductions (90.9%, 10/11) than in the aspirations (20.8%, 5/24) ( p < .01) ( ). In this sensitivity analysis, the fetal autopsy led to a change in the genetic counseling provided in 11.4% of all cases (4/35), still significantly more often in the medical induction group (27.3%, 3/11) than in the aspiration group (4.2%, 1/24) ( p = .046).
Subgroup analyses: Cerebral abnormalities ( )

Analysis of the subgroup with cerebral abnormalities (N = 27) showed that the autopsy had a greater diagnostic value than ultrasound in 40% of the cases (8/20), again more frequently in the medical induction group (71.4%, 5/7) than in the aspirations (23.1%, 3/13) ( p = .03) ( ). For example, in a case of ultrasound-suspected exencephaly, the autopsy identified a multiple malformation syndrome with iniencephaly, occipital meningocele, cervicodorsal rachischisis, external genitalia abnormalities, and pulmonary hypoplasia. In other cases, the autopsy identified vertebral, costal, and renal abnormalities associated with exencephaly. In two cases, it enabled a diagnosis of amniotic band syndrome. In one case of hydrocephaly, it identified signs suggesting a hemorrhagic stroke.

Maternal complications according to method of termination

As the different protocols suggested, the length of stay was significantly longer in the medical induction group (2.1 days +/- 0.5) than in the aspiration group (1 day +/- 0.2, p < .001). In 16.7% of the medical inductions, a secondary aspiration was required for retention of the products of conception (N = 6) or hemorrhage (N = 1); there was none in the aspiration group ( p < .001). Post-expulsion retention was observed in 11.4% of all cases (n = 9): 14.3% (6/42) in the medical induction group and 8.1% (3/37) in the aspiration group ( p = .5). There was one case of hemorrhage in each group and one infection in the medical induction group ( ).
This study showed that, in cases of TOP performed for a fetal anomaly diagnosed at the first-trimester ultrasound and without a cytogenetic abnormality, the fetal autopsy provided additional information over and above that of the ultrasound in 54.4% of the cases, and more frequently after a medical induction (77.5%) than after a vacuum aspiration (21.4%, p < .001). It led to an important change in the genetic counseling and management for the subsequent pregnancy in 20.6% of cases, 32.5% of those after medical induction and 3.6% after aspiration; the difference between the two groups was statistically significant ( p < .001). The principal strengths of our study lie in the performance of fetal autopsies by two physicians specialized in fetal pathology, according to a standardized examination protocol for each method of termination. The conclusions were systematically discussed at a multidisciplinary meeting. We used a strict definition of modification of genetic counseling. Moreover, before each decision for TOPFA, the ultrasound was repeated for independent verification by a specialist. These cases were collected over a limited period of 5 years, which allows us to consider that their management was homogeneous. Our collection of complications was exhaustive because the women were all seen at follow-up visits after their TOPFA, and secondary complications were always collected. The study is limited by its retrospective nature, and the limited size of our population did not allow subgroup analyses by type of anomaly. Some bias might have been present, especially related to the choice of procedure according to gestational age. Thus, examination of the characteristics of the TOP shows that those performed by aspiration took place at a significantly earlier gestational age than those by medical induction. This might constitute a confounding factor for the assessment of the value of autopsies after medical induction. Nonetheless, the absolute value of the difference in gestational age is small, and the sensitivity analysis restricted to TOPFA performed before or at 14 weeks yielded similar, significant results. Moreover, it is important to note that the reasons for TOPFA did not differ between the groups. These results point in the same direction as those of our team’s first study on this topic, conducted between 2006 and 2008. That study reported that the autopsies were diagnostically valuable in 42% of the cases, with a greater benefit in the medical inductions, albeit without a statistically significant difference (65% in the medical induction group versus 20% in the aspiration group). It also found that genetic counseling was modified in 37% of the cases, with a difference between the groups: 62% in the medical induction group versus 13% in the aspiration group, p < .01. The difference in the rate observed in our study for the modification of genetic counseling can be explained by our more restrictive definition; we did not include cases in which the autopsy resulted in a conclusion that the abnormality was probably sporadic. We decided not to combine the results of the two studies for the analyses because there were notable improvements between the two periods in the screening and diagnosis of malformations by ultrasound, including improvements in the ultrasound instruments. Our findings that autopsies modified genetic counseling, regardless of the method of termination, are similar to those reported for TOP after 18 weeks [ , – ]. Those studies reported that diagnostic benefits occurred in 18 to 51% of cases.
A systematic review of the literature published in 2017 by Rossi et al., including results from 19 studies and 3,534 fetal autopsies, found that autopsy and ultrasound results were equivalent in 68% of the cases. In 22.5%, the autopsy found additional abnormalities, and in 3.8% of the cases, these modified the diagnosis. These percentages are a little lower than ours. Nonetheless, the endpoint was different, since we assessed the value of the additional signs from the fetal autopsy compared with a first-trimester ultrasound alone. Moreover, in this review of the literature, the mean gestational age at which the abnormalities were detected on ultrasound ranged between 17 and 20 weeks, while our study considered terminations performed for abnormalities observed during the first-trimester ultrasound, that is, before 14 weeks. We might wonder whether the contribution of the autopsy differs by the gestational age at which the TOPFA is performed, especially if it is still earlier in pregnancy. That is, even if an autopsy is possible at 11 weeks, at this stage of morphogenesis and histogenesis some abnormalities may not yet be diagnosable. Nonetheless, this problem also occurs with ultrasound: some abnormalities are not visible at an early gestational age. The fetal autopsy modified the genetic counseling in 20.6% of cases, making it possible to plan prenatal diagnosis for the next pregnancy: a work-up for the couple, early ultrasound scans if there is a risk of recurrence, and possible karyotyping or a genetic examination. Moreover, the autopsy provided relevant information even when it did not find additional abnormalities compared with ultrasound. That is, it allowed a conclusion, for example, that the anomaly was indeed isolated and not syndromic, and thus probably sporadic; in such cases there is no indication for prenatal diagnosis in a subsequent pregnancy. The benefit provided by fetal autopsy would perhaps now be even greater, because autopsy findings can prompt a proposal for exome or genome sequencing when a genetic pathology is suspected. Nonetheless, even without modifying the genetic counseling, the management of the next pregnancy was often modified in practice, with the introduction of specifically focused ultrasounds. For example, folic acid at a dose of 5 mg was proposed for subsequent pregnancies in women with a fetal neural tube defect. These modifications were not necessarily due to an additional finding from the autopsy but could have been proposed based on the abnormalities found on the ultrasound alone; we therefore did not consider that the autopsy modified the genetic counseling in these situations. In two cases, the autopsy added the finding of a myelomeningocele to multiple malformation syndromes already visualized on ultrasound; it thus justified folic acid supplementation at a dose of 5 mg for subsequent pregnancies. We conducted a secondary analysis in the subgroup of cephalic abnormalities, because it might be expected that an autopsy would only rarely modify these diagnoses; if this were confirmed, vacuum aspiration could be performed in most such cases. Our results instead showed that autopsy added diagnostic value in a substantial proportion of these cases (40%), once again with a difference between the methods of termination.
A difference in the value of the autopsy analysis by method of termination may also arise for bone abnormalities, where the contribution of the bone radiographs that can be performed on the products of aspiration could lead to the conclusion that this method of termination is "sufficient". Larger complementary studies focused on specific types of early-identified abnormalities would be of interest to optimize the discussion with patients about the preferred method. We found more complications (secondary aspiration and retention) after medical induction than after aspiration, as well as a longer hospital stay. The results reported by Gitz et al. were similar. We note that the absolute number of complications was small, and they were minor. One of the limitations of our study is the small number of patients available for comparing complication rates (retention, hemorrhage) by method of termination. Other studies have compared these different methods. Thus, Lohr et al. reviewed the literature and showed fewer and milder undesirable effects after aspiration than after medical induction (OR 0.06, 95% CI 0.01–0.76), the latter also being accompanied by more pain. Nonetheless, acceptability and efficacy were ultimately identical in the two groups. One of the important remaining questions concerns women’s experiences and psychological status after TOP. That is, the benefit provided by the autopsy after medical induction must be balanced against the potential benefits associated with the simplicity of the aspiration procedure. The unavailability of a reliable and exact diagnosis after TOP, or a lack of information, can also have an important impact on the women’s psychological prognosis. Accordingly, Korenromp et al. assessed women’s psychological status at 4, 8, and 16 months after their termination. They observed that 46% had posttraumatic stress at 4 months and 20.5% at 16 months. Factors predictive of PTSD were a high level of doubt at the time of the decision, lack of support from their partner, higher gestational age, and religious convictions. Other studies have compared both methods of termination according to the women’s perceptions and showed that the surgical pathway is more acceptable. Thus, Kelly et al. reported that after aspiration, 100% of the women would prefer to undergo the same procedure if they had to have another TOP, compared with 53% in the medical induction group ( p < .001). Among the women undergoing aspiration, not one found the procedure more traumatizing than expected, compared with 53% after medical induction ( p = .001), which involved more bleeding ( p = .003) and more pain ( p = .008).
For early TOPFA before 16 weeks, medical induction with vaginal expulsion seems preferable because it enables a more complete and exact analysis of a whole fetus and is more likely to modify genetic counseling or the management of subsequent pregnancies. The balance between the need for a precise diagnosis, the woman’s psychological status, and the potential obstetric complications must be systematically discussed before TOPFA. The benefits and risks of the different procedures must be clearly explained so that women can make an informed choice. Additional studies to analyze the value of fetal autopsies by gestational age at the time of TOP and by type of abnormality might be interesting. Prospective studies would be necessary to assess the physical and psychological consequences of the different methods.
Survival analysis of laryngeal squamous cell cancer, considering different treatment modalities and other factors influencing survival – a monocentric retrospective investigation | 33272e56-4770-4c55-8026-bad6da9e7e6f | 11950094 | Surgery[mh] | Laryngeal cancer can spread to the supraglottic, glottic, and subglottic regions. It is the second most common type of head and neck cancer, with an estimated 184,615 new cases reported in 2020. Central and Eastern Europe have the highest mortality rates associated with this type of cancer . Squamous cell carcinoma is the most common histological subtype, and almost all squamous cell variants are found in the laryngeal region . Generally, laryngeal cancer shows a higher occurrence in males, with smoking and regular alcohol consumption being the primary risk factors . Additionally, comorbidities such as type 2 diabetes mellitus (T2DM) , chronic obstructive pulmonary disease (COPD) and coronary disease can also impact general health status and survival. However, p16-related oncogenic pathways have been suspected and described in laryngeal cancer, but the effects of p16-positive and HPV-related oncogenicity remain unclear in laryngeal cancers . The treatment of laryngeal cancers can be challenging, often necessitating multimodality treatment for advanced cases. In cases of early laryngeal cancer, patients may be offered surgery or primary radiotherapy as treatment options. For this, transoral laser surgery or open partial laryngectomy can also be choice. Surgical treatment may include various modalities, with an emphasis on options that lead to better functionality and rehabilitation. Therefore, transoral laser surgery or open partial laryngectomy could be viable choices . In cases of advanced laryngeal cancer (T3-4N0-3 disease), multimodality treatment is often necessary, and partial surgeries may not be feasible in some instances. Multimodality treatment refers to either surgery followed by radiotherapy or chemoradiation. In contrast to early-stage diseases, the treatment of advanced laryngeal cancers involves level I evidence, achieving good locoregional control using chemoradiation, which enables larynx preservation. For T3 diseases, chemoradiation should be considered, and in cases of tumour recurrence, salvage total laryngectomy following chemoradiation should be indicated. In T4a cases, total laryngectomy with adjuvant radiation yields similar results for locoregional control as chemoradiation or salvage surgery but with better survival rates. For T4a cases, it is not recommended to use chemoradiation, as it leads to lower survival rates . In some T3 and T4a cases, partial surgeries with an external approach can be beneficial, providing better functionality and similar survival rates compared to radical surgeries . After salvage surgeries, a higher rate of complications, such as pharyngocutaneous fistulae and slow wound healing, may be expected . Surgeries for laryngeal cancers can include total laryngectomy and partial laryngeal surgeries, such as transoral cordectomy, hemilaryngectomy, supraglottic horizontal resection and supracricoid horizontal partial laryngectomy with the most important ones among them mentioned. Total laryngectomy is the complete removal of the larynx, requiring ventilation through a tracheostomy tube due to airway separation. This procedure is primarily carried out for advanced T3 and T4 cancers. 
Transoral cordectomy, typically performed for T1 cancers of one vocal cord, involves the resection or removal of the vocal cord, often utilising laser or robotic assistance. Type IV cordectomy primarily involves the complete removal of the vocal cord, including the epithelium, ligament, and muscle, from the anterior to the posterior commissure. Type I refers to subepithelial resection, while type II is subligamental resection. Type III involves a transmuscular resection with the potential removal of the ventricular fold. For T1 and T2 supraglottic cancers, supraglottic horizontal resections or Alonso surgery involve removing the entire laryngeal vestibule, with the incision line running horizontally through the laryngeal ventricle of Morgagni. This procedure eliminates all supraglottic structures and the pre-epiglottic area. Hemilaryngectomy refers to a modified surgical procedure of Hautant. It involves a vertical skin incision and the removal of the tumorous hemilarynx, including the vocal cord, anterior commissure, ipsilateral arytenoid cartilage, and most of the ipsilateral thyroid cartilage. Nowadays, the cricoid cartilage is usually spared in this procedure. When performing a supracricoid horizontal partial laryngectomy, the thyroid cartilage, vocal cords, arytenoid regions, epiglottis (i.e., cricohyoidopexy), and the parapharyngeal space are completely removed, while preserving the hyoid bone and the cricoid cartilage. This procedure is beneficial for glottic and glotto-supraglottic cancers, as well as for some selected cases of T4 supraglottic and glottic cancers. In addition to enhancing surgical treatments for laryngeal cancers, there have been advancements in non-surgical treatment options as well. When considering radiotherapy, a total dose of 66–70 Gy is typically applied in 33–35 fractions (2 Gy per fraction). Postoperatively, radiation dosages of 54 to 60 Gy are recommended for locally advanced tumours, 60 to 66 Gy for complete surgical resections, and 60 to 66 Gy for R1 resections. A dosage of 50 to 66 Gy is used for the lymph node areas, depending on extracapsular spreading. It is important to initiate radiotherapy as soon as possible, preferably within two months after surgery. For instance, a previous study demonstrated the positive impact of postoperative adjuvant radiation on pT4aN0 glottic cancers. Chemotherapy involves a combination of cisplatin and 5-fluorouracil using body-surface-area-based dosing, specifically 100 mg/m² intravenously every three weeks. Since 2008, the ‘EXTREME’ protocol has also been used, which combines cetuximab, an epidermal growth factor receptor monoclonal antibody, with chemotherapy options such as cisplatin, carboplatin, or 5-fluorouracil. Previous trials indicated that this treatment combination improved overall survival (OS) in patients with recurrent or metastatic squamous cell head and neck cancers as a first-line option. Furthermore, since 2019 a scheme combining pembrolizumab, a humanised antibody blocking the PD-1 immune checkpoint receptor, with chemotherapy has been applied. Pembrolizumab monotherapy and its combination with platinum and 5-fluorouracil have significantly improved OS in patients with recurrent or metastatic squamous cell head and neck cancers, based on previous trials. Adjuvant chemoradiation is recommended for high-risk cases with histological evidence of regional lymph node metastases, extracapsular extension of the nodal disease, microscopically involved tumour margins, and in some cases, when the resection margin is less than 2 mm.
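As a brief aside on what body-surface-area-based dosing means arithmetically, the short sketch below works through the calculation. The Mosteller BSA formula used here is a common convention but is an assumption on our part (the paper does not state which formula was applied), and the snippet is purely illustrative, not clinical guidance.

```python
import math

def mosteller_bsa(height_cm: float, weight_kg: float) -> float:
    """Body surface area in m^2 (Mosteller formula, assumed here for illustration)."""
    return math.sqrt(height_cm * weight_kg / 3600.0)

def bsa_based_dose_mg(bsa_m2: float, dose_per_m2: float = 100.0) -> float:
    """Total dose for a BSA-based prescription, e.g. cisplatin at 100 mg/m2 per cycle."""
    return dose_per_m2 * bsa_m2

# Example: a 175 cm, 80 kg patient has a BSA of about 1.97 m^2,
# so a 100 mg/m2 prescription corresponds to roughly 197 mg per three-week cycle.
bsa = mosteller_bsa(175, 80)
print(f"BSA = {bsa:.2f} m^2, dose = {bsa_based_dose_mg(bsa):.0f} mg")
```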
In some cases, docetaxel induction chemotherapy may also be applied. Eligibility for chemotherapy is assessed using the ECOG (Eastern Cooperative Oncology Group) scale, and routine laboratory testing, including white blood cell and platelet counts and creatinine clearance, is performed before chemotherapy. According to some previous research reports, it is possible to perform partial surgeries even after chemoradiation. Despite numerous changes and improvements in the management of laryngeal cancer, the 5-year survival rates have not increased as anticipated. Five-year survival depends on disease stage and location. For example, a previous investigation found 5-year disease-specific survival (DSS) of 100% for T1a, 95% for T1b, 78% for T2, 79% for T3, and 53% for T4 glottic cancers. In cases of supraglottic cancers, the observed DSS values were 68%, 54%, 72%, and 59%, respectively. Over the past 150 years, there has been a significant shift in the treatment of laryngeal cancers. Previously, radical surgeries were the norm, even for early-stage disease. However, in the current era of treating laryngeal cancers, the focus is on partial surgeries with organ preservation, aiming for improved functionality and similar survival rates compared to radical surgeries. In addition to external methods, significant progress was made by utilising CO2 laser surgeries in the early stages of laryngeal cancer. Since then, investigations have found similar survival rates in T1 and T2 laryngeal cancers when comparing surgeries and primary radiotherapy. As part of the advancement in treatment options, multimodality treatments have significantly improved outcomes in laryngeal cancers. Better survival rates have been observed with chemoradiation compared to radiotherapy. However, this observation is relevant only to advanced stages of laryngeal cancers. Despite notable advancements in treating laryngeal cancers, survival rates have remained largely unchanged. To address this issue, the primary objective of this study was to analyse the various treatment approaches and potential factors influencing survival in laryngeal cancers.
Study population and design

A total of 293 patients with squamous cell larynx cancer diagnosed between July 2002 and March 2023 were enrolled in this investigation. Each patient was diagnosed and treated in the Department of Otorhinolaryngology and Head and Neck Surgery of Semmelweis University, with at least one year of follow-up. The diagnosis and treatment followed the latest National Comprehensive Cancer Network ® (NCCN) guideline for Head and Neck Cancers. Inclusion criteria were a diagnosis of squamous cell larynx cancer in any laryngeal region by an otorhinolaryngologist and histological examination, patient consent to participate in this investigation, eligible clinical data, and at least one year of follow-up. Patients with another type of head and neck cancer or another primary tumour, lacking clinical data, lost to follow-up, or not consenting to participate were excluded from this study. Clinical data, including clinical examinations, treatment, survival factors (e.g., smoking, alcohol consumption, TNM, stage or p16 expression), and comorbidities (e.g., T2DM, COPD, and coronary disease), were obtained from the University’s electronic medical system. Weight loss was assessed over a 6-month timeframe. The treatment modalities were categorised as total laryngectomy, supracricoid horizontal partial laryngectomy, supraglottic horizontal resection, hemilaryngectomy, transoral laser cordectomy, chemoradiation, chemotherapy, and radiotherapy (including palliative irradiation); patients who did not consent to or could not receive treatment due to poor health were categorised as not having received treatment. Larynx cancer was categorised by location as supraglottic, glottic or subglottic cancer.

Clinical examinations, diagnostic work-up

All patients were diagnosed with laryngeal cancer by a specialist, using a general otorhinolaryngological examination and laryngoendoscopy. After the diagnosis, staging examinations were offered, including contrast-enhanced CT scans of the neck, chest, and abdomen. Alternatively, PET-CT scans were recommended in combination with contrast-enhanced neck CT scans. Laryngomicroscopy under general anaesthesia was performed to obtain cancer tissue for histological analysis and to observe local tumour spreading. Additionally, p16 immunohistochemistry was applied during the histological examinations, as detailed below. Staging classification for laryngeal cancers was determined using the most recent American Joint Committee on Cancer (AJCC) staging manual. The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Institutional Ethics Committee of Semmelweis University (protocol code: SE IKEB 105/2014; approval date: 29 May 2014).

Comorbidities, potential influencing factors

The co-occurrence of T2DM, COPD, and coronary artery disease and their potential impact on survival were analysed, considering the general characteristics of the population. As main risk factors for laryngeal cancer, smoking and regular alcohol consumption were also considered. Body mass index (BMI) and the potential impact of weight loss on survival, which might also influence outcomes, were also investigated. Patients’ daily functionality was evaluated using the ECOG scale, and its score was included as a potential influencing factor. TNM stages and grades were also taken into account.
p16 immunohistochemistry

For the p16 immunohistochemistry, 4 μm histology slides were stained using a Benchmark Ultra Plus automated system (Roche, Basel, Switzerland). The process began with deparaffinisation of the histology slides in EZ Prep Solution (Roche, Basel, Switzerland). Subsequently, a cell conditioning solution (pH = 9) for heat-induced epitope retrieval (Roche, Basel, Switzerland) was applied at a temperature of 95 °C for 30 min. Following this, a drop of UV INHIBITOR (Roche, Basel, Switzerland) was used to inhibit endogenous peroxidase activity and left to incubate at 37 °C for 6 min. A p16-INK4 monoclonal antibody (Cell Marque, Rocklin, CA) was then incubated at 37 °C for one hour at a dilution of 1:100. The binding of the primary antibodies was visualised using the OptiView Amplification kit (Roche, Basel, Switzerland). Nuclear counterstaining was achieved using Haematoxylin II (Roche, Basel, Switzerland). Additionally, a diluted Reaction Buffer Concentrate (Roche, Basel, Switzerland) was used for washing. Positive p16 staining was defined as a distinct cytoplasmic and nuclear positive reaction in at least 70% of the tumour tissue.

Statistical analysis

Data processing was performed using IBM SPSS V25 software (IBM Corporation, Armonk, NY, USA). Data normality was checked using the Shapiro–Wilk test. Continuous variables were provided as mean ± SD or median values, based on the normality of the data. The Mann–Whitney U and Kruskal–Wallis tests were used to analyse differences between groups. To analyse survival, Kaplan–Meier survivorship curves were plotted, and the influence of different factors on survival was analysed using the log-rank (Mantel–Cox) test. Moreover, to analyse the effects of multiple variables on survival, Cox proportional hazards regression was applied. This model included the following parameters: therapy, age, sex, p16 expression, BMI, weight loss, T2DM, COPD, coronary artery disease, smoking, regular alcohol consumption, TNM, and tumour grade. Spearman’s correlation test was used to analyse correlations between parameters. A p-value under 0.05 was consistently considered statistically significant.
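To make the survival workflow concrete, here is a minimal, hypothetical sketch of the same analysis steps in Python using the pandas and lifelines libraries. The authors used IBM SPSS, so this is not their code, and the file and column names (e.g., larynx_cohort.csv, os_months, death) are placeholders; categorical covariates are assumed to be numerically coded.

```python
# Illustrative sketch only: re-creates the reported analysis steps
# (Kaplan-Meier curves, log-rank test, Cox regression) with placeholder data.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import multivariate_logrank_test

df = pd.read_csv("larynx_cohort.csv")  # hypothetical file, one row per patient

# Kaplan-Meier survival curves stratified by treatment modality
kmf = KaplanMeierFitter()
for treatment, group in df.groupby("therapy"):
    kmf.fit(group["os_months"], event_observed=group["death"], label=str(treatment))
    kmf.plot_survival_function()  # requires matplotlib

# Log-rank (Mantel-Cox) test across all treatment groups
logrank = multivariate_logrank_test(df["os_months"], df["therapy"], df["death"])
print("log-rank p =", logrank.p_value)

# Multivariable Cox proportional hazards model with the covariates listed above
covariates = ["therapy_code", "age", "sex", "p16", "bmi", "weight_loss",
              "t2dm", "copd", "coronary_disease", "smoking", "alcohol",
              "t_stage", "n_stage", "m_stage", "grade"]
cph = CoxPHFitter()
cph.fit(df[["os_months", "death"] + covariates],
        duration_col="os_months", event_col="death")
cph.print_summary()  # hazard ratios (exp(coef)) and p-values per covariate
```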
The basic clinical data of the examined groups are provided in Table . According to the table, the average age of patients with larynx cancer was approximately 64 years, indicating that older individuals are generally affected. Additionally, there was a significant male predominance in this study population. When comparing survival times by sex, no significant differences were found ( p = 0.198, Z-score: 1.287, Mann–Whitney U test). When analysing the correlation between age and survival, no significant correlation was observed (rho = 0.089, p = 0.135) according to Spearman’s correlation test. The average follow-up time for this population was 45 months (3.75 years), with the longest follow-up period being 20.83 years. Most patients had glottic cancer (71.3%), followed by supraglottic (26.27%) and subglottic (2.43%) types. p16-positivity was observed in 21.5% of the entire study population. When considering cancer location, 38.9% of supraglottic, 15.8% of glottic, and 0% of subglottic cases presented p16-positivity. Regarding treatment modalities, the most common categories were total laryngectomy (27.6%), supraglottic horizontal resection (19.45%), and chemotherapy (16.72%). Approximately 17% of patients did not receive treatment. In terms of comorbidities, 22.86% had coronary artery disease, 13.65% had T2DM, and 11.9% had COPD. Smoking was reported in 88% of cases, and regular alcohol consumption in 56.3%. In the next step of the investigation, OS according to treatment modality was analysed. The results are depicted in Fig. . Figure . reveals that patients who underwent hemilaryngectomy had the longest median survival of 156 months (95% CI: 126.027–169.762). A similar outcome was revealed for supraglottic horizontal resection, with a median OS of approximately 154 months (95% CI: 121.378–186.428). Conversely, the shortest median survival time of 24 months (95% CI: 10.766–37.234) was observed in the chemotherapy group. The log-rank test indicated a statistically significant difference in OS between the groups ( p = 0.000*). Interestingly, a lower OS than expected was achieved in the group that underwent transoral laser cordectomy. This could be due to the fact that our patients who had transoral laser cordectomy were newly diagnosed and had relatively shorter follow-up times compared to the other groups (28.87 ± 19.95 months vs. 46.02 ± 31.17 months), with a statistically significant difference ( p = 0.001*, Z-score: 3.26; Mann–Whitney U test). As the next step, a Kaplan–Meier analysis depending on the larynx cancer stages and considering surgical and non-surgical treatment modalities was performed, and the results are depicted in Fig. . According to Fig. ., patients with stage 1 laryngeal cancer showed significantly ( p = 0.007*, log-rank test) longer survival in the surgical treatment group (median survival: 154.317 months; 95% CI: 124.566–184.068) compared to the non-surgical treatment group (median survival: 79.407 months; 95% CI: 59.241–99.574). In stage 2 disease, the surgical treatment group showed a tendency toward longer survival (median survival: 150.683 months; 95% CI: 110.918–190.448); however, this difference was not statistically significant ( p = 0.112, log-rank test) when compared to non-surgical treatment options (median survival: 47.389 months; 95% CI: 31.246–63.531).
A similar pattern was noted for stage 3 disease, with no statistically significant difference (p = 0.145, log-rank test) between the surgical (median survival: 83.623 months; 95% CI: 63.494–103.752) and non-surgical treatment (median survival: 49.071 months; 95% CI: 28.869–69.274) groups. In stage 4 laryngeal cancers, surgical treatment (median survival: 56.342 months; 95% CI: 40.253–72.430) was associated with statistically significantly (p = 0.007*, log-rank test) longer survival than non-surgical treatment (median survival: 25.769 months; 95% CI: 17.674–33.865). To analyse differences in OS between supraglottic, glottic and subglottic cancers, an additional survivorship curve was plotted (Fig. .). As depicted in Fig. ., the Kaplan–Meier analysis revealed the longest median OS of 76.098 months (95% CI: 64.462–87.735) in glottic cancers, followed by a slightly lower median OS of 60 months (95% CI: 39.338–80.662) in supraglottic cancers. The lowest survival was found in subglottic cancers, with a median of 48 months (95% CI: 15.526–80.474). However, the statistical analysis did not show a significant difference between the groups (p = 0.640), although there was a tendency toward higher survival in glottic and supraglottic cancers. In addition to comparing the effect of therapy on survival, the predictive value of other factors was also analysed, and the results are presented in Table . As shown in Table ., smoking (p = 0.166; HR = 1.968), regular alcohol consumption (p = 0.534; HR = 0.850), and conditions such as T2DM (p = 0.741; HR = 0.874) and COPD (p = 0.188; HR = 1.610) did not significantly predict survival in laryngeal cancers. BMI (p = 0.775; HR = 1.045) and weight loss (p = 0.748; HR = 1.120) also had no significant effect, nor did p16-positivity (p = 0.458; HR = 0.680) or sex (p = 0.386; HR = 0.819). However, coronary artery disease was identified as a significant predictor of worse survival (p = 0.039*; HR: 0.947). Additionally, significantly better survival was associated with lower ECOG scores (p = 0.001*; HR = 1.705), lower 'N' stages (p = 0.025*; HR = 1.662), and lower tumour grades (p = 0.029*; HR = 2.409).
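As a companion to the stage-stratified results above, the short sketch below indicates how median OS with its 95% CI and a pairwise log-rank test could be obtained for the surgical versus non-surgical groups within each stage; again, the data frame and column names (os_months, death, stage, surgical) are hypothetical stand-ins, not the study's SPSS output.

```python
# Illustrative sketch with hypothetical columns; 'surgical' is assumed to be a boolean flag.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.utils import median_survival_times
from lifelines.statistics import logrank_test

df = pd.read_csv("larynx_cohort.csv")  # hypothetical file

for stage, stage_df in df.groupby("stage"):
    surg = stage_df[stage_df["surgical"]]
    nonsurg = stage_df[~stage_df["surgical"]]
    for label, grp in (("surgical", surg), ("non-surgical", nonsurg)):
        km = KaplanMeierFitter().fit(grp["os_months"], event_observed=grp["death"])
        ci = median_survival_times(km.confidence_interval_)  # 95% CI around the median
        print(f"stage {stage}, {label}: median OS = {km.median_survival_time_}, CI = {ci.values.ravel()}")
    result = logrank_test(surg["os_months"], nonsurg["os_months"],
                          event_observed_A=surg["death"], event_observed_B=nonsurg["death"])
    print(f"stage {stage}: log-rank p = {result.p_value}")
```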
The study examined the survival rates of patients with laryngeal cancer based on different treatment methods and risk factors. Because laryngeal squamous cell cancer presents with diverse clinical characteristics and treatment options, survival outcomes can vary widely among patients. Our findings indicated that coronary artery disease, higher ECOG performance status, advanced tumour stages, primary chemotherapy, and radiotherapy were all linked to lower OS rates. When comparing survival rates for laryngeal cancer across different stages and treatment options, significantly longer survival was found for patients who underwent surgical treatment in stages 1 and 4. For stages 2 and 3, although patients who received surgical treatment also tended to have longer survival, these differences were not statistically significant when compared with those receiving non-surgical treatments, such as primary radiotherapy for stage 2 and chemoradiation for stage 4. Among the various treatment methods, hemilaryngectomy and supraglottic horizontal resection provided the longest survival in the entire study population. Previous studies indicate that survival rates in laryngeal cancer vary widely based on the stage of the disease and other potential contributing factors. For example, Cui et al. examined the 5-year survival rates in laryngeal cancer, considering factors such as age, smoking history, 'N'-stage, and various laboratory testing parameters. The 5-year OS rate was 63.5% in the training group and 67.7% in the validation group, according to their findings. Using the aforementioned parameters in a nomogram significantly improved the prediction of survival compared to using only the TNM system . Considering the stages of the disease, a previous study reported a threefold higher risk of death in laryngeal cancer at stages 3 and 4. However, a higher chance of survival was observed following surgery . Another study found significantly improved DSS and OS in advanced stages of laryngeal cancers. Furthermore, stage 3 disease, glottic location, female sex, and married status were found to have a positive influence, while black race and increased age were found to have a negative impact on DSS . In our study, we observed better survival rates with surgery, especially in disease stages 1 and 4. Factors such as tumour location, age, and sex did not significantly impact survival in our group. Another investigation with long-term follow-up reported similar survival rates for all primary treatment modalities (i.e., total laryngectomy, chemoradiation and radiotherapy) in T3 cancers, but better survival with total laryngectomy, with or without adjuvant radiotherapy, in T4 cases . In a study comparing surgery and radiotherapy for early laryngeal cancers (T1–T2–N0), both treatment options were found to be equally effective. The study included supraglottic horizontal resection, hemilaryngectomy, endoscopic laser surgery, and microlaryngoscopic surgery. However, the study also found that radiotherapy resulted in significantly better voice and speech rehabilitation. This suggests that the impact on quality of life and functionality after treatment should also be taken into account when choosing a treatment option . Laryngeal cancer affects men more than women, with recent reports indicating lower survival rates for men . Our study also found a higher number of males diagnosed with laryngeal cancer; however, we did not observe any difference in OS by sex.
Sexton et al. reached a similar conclusion, finding no association between male sex and poorer survival compared to females in their study . In our study, we observed a high prevalence of smoking and alcohol use, which are primary risk factors for laryngeal squamous cell cancer. However, we found that these factors did not significantly predict survival outcomes. Among comorbidity characteristics, we identified coronary artery disease as a significant predictor of worse survival outcomes, while respiratory diseases, such as COPD, and T2DM were not found to be significant predictors. Previous research consistently indicates that pre-treatment comorbidities are linked to survival in head and neck cancer, with higher comorbidity rates associated with poorer survival outcomes [ – ]. Mulcahy et al. also showed that both age and comorbidities independently affect prognosis in laryngeal cancer due to their impact on non-cancer-related mortality risk . The reported p16-positivity rate in laryngeal cancers varies widely in the literature, ranging from 1 to 58%. In our study, 24.2% of the examined population was p16-positive. In the study conducted by Chung et al., p16-positivity was found in 7.8% of the examined population . In their report, Dogantemur et al. found p16-positivity in 20% of their cases . Studies examining the relationship between p16-positivity and tumour location have shown varied findings. Xu et al. observed significantly higher p16-positivity in supraglottic cases, whereas other studies have not consistently shown such differences. The study by Dogantemur et al. found a strong link between p16-positivity and tumours located in the supraglottic region . They observed that 55.6% of p16-positive cases were located in the supraglottic area. In our study, 38.9% of cases were p16-positive in the supraglottic region, whereas 15.8% were positive in the glottic region. The treatment for laryngeal squamous cell cancer is generally similar regardless of where the cancer is located, but the outcomes can vary significantly. The treatment approach and TNM staging can greatly influence survival prognosis. Our study emphasises the continued importance of surgery in treating laryngeal cancer, whether in the early or advanced stages. The current treatment options for managing laryngeal cancer include total laryngectomy, hemilaryngectomy, supracricoid horizontal partial laryngectomy, supraglottic horizontal resection, chemotherapy, and radiotherapy. According to our findings, the highest OS was observed in cases where hemilaryngectomy was performed, with a median survival of 156 months, followed by supraglottic horizontal resection at approximately 154 months. Conversely, patients who underwent radiotherapy or chemotherapy had the lowest OS. Molina-Fernández et al. found that patients who underwent surgery had better cause-specific survival and disease-free survival at five years compared to those treated with radiation therapy alone . Sexton et al. found that radiotherapy for supraglottic lesions resulted in worse DSS outcomes . In our investigation, comparison of treatment outcomes across 'T' stages showed that supraglottic horizontal resection, supracricoid horizontal partial laryngectomy, and total laryngectomy generally yielded better outcomes than the other treatment methods studied. Partial laryngeal surgeries are important for preserving patients' functionality and quality of life.
Following a total laryngectomy, patients require a tracheostomy tube and encounter more challenges in speech rehabilitation. The most commonly reported issues include loss of verbal communication; tracheostoma-related problems such as increased secretion, cough, and frequent upper-airway infections; olfactory problems such as hyposmia or anosmia; and emotional and sexual challenges . Following radiotherapy, some late complications can also be observed. These may include xerostomia (permanent loss of saliva), osteoradionecrosis, hypothyroidism, pharyngoesophageal stenosis resulting in dysphagia, lymphoedema, dizziness, and lightheadedness . Hence, partial surgeries should be considered in order to prevent complications and enhance functionality. A previous study comparing monocentric and multicentric supraglottic cancers treated with partial laryngectomies showed no significant increase in nodal metastasis rates . Another study found that transoral laser surgery yielded similar or better results for supraglottic cancers compared to traditional open supraglottic surgeries or total laryngectomy, regardless of the stage. Furthermore, transoral laser surgeries demonstrated significantly better outcomes than radiotherapy in advanced stages and slightly better results in early stages . In a previous study, supracricoid partial horizontal laryngectomy was found to yield positive oncological outcomes. The study reported 85.8% 3-year OS, 79.1% 5-year OS, 57.6% 10-year OS, and 57.6% 16-year OS rates for patients. Importantly, to preserve functional outcomes, postoperative radiotherapy was not administered in most cases, with the exception of invasive carcinoma cases . Therefore, partial surgeries should be prioritised when local tumour spread and the patient's general health status allow. Furthermore, personalised treatment selection is essential before treating advanced laryngeal cancers, according to the results of a previous investigation, which yielded good outcomes using open partial laryngectomies followed by adjuvant chemoradiation . The strengths of this study are that it included a large number of patients with laryngeal cancer and achieved long-term follow-up. Furthermore, the study analysed survival rates for different treatment methods, rather than focusing on just one specific treatment option as in previous investigations. This allowed for a comparison of OS rates among different treatment options and disease stages. However, this study had some limitations. First, because treatment was divided into various groups, the distribution of patients differed considerably between groups, which could bias the statistical analysis. For instance, this may partly explain the surprisingly low survival rates observed for transoral laser cordectomy, among other possible influencing factors. Furthermore, some comorbidities that appeared in only a low percentage of the population could not be analysed in terms of survival, although they could also affect patients' OS. Finally, long-term follow-up can result in substantially different follow-up times across patients.
When considering treatment options for laryngeal cancers, this research has shown that the longest survival was observed after hemilaryngectomy and supraglottic horizontal resection. These procedures not only lead to better survival but also help preserve organ function. Therefore, partial laryngeal surgeries should be considered when local tumour spread allows. Factors influencing survival included the presence of coronary artery disease. Furthermore, poorer survival outcomes are anticipated for cases with higher 'N' stages, ECOG performance scores, and tumour grades. When comparing surgical and non-surgical treatment options across disease stages, significantly longer survival was observed for surgical treatment in stages 1 and 4. Given the study design, future research should investigate the factors influencing survival in laryngeal cancers.
|
A broad assessment of rotavirus vaccine safety in infants in Korea: Insights from a data-driven signal detection approach | 0111fd6c-7a5e-42cd-a605-75987cdbdc7f | 11834447 | Vaccination[mh] | Rotavirus is a leading cause of severe dehydrating gastroenteritis, characterized by diarrhea and vomiting, in infants and young children worldwide. It commonly spreads through families, communities, hospitals, and daycare centers. To prevent rotavirus gastroenteritis, the first rotavirus vaccine was licensed by the U.S. Food and Drug Administration in 1998. The World Health Organization (WHO) recommends widespread rotavirus vaccination of infants, and as of August 2024, the vaccine has been introduced in 125 countries globally. WHO has prequalified four oral, live attenuated rotavirus vaccines; Rotarix™, RotaTeq™, Rotavac™, and RotaSiil™. In the Republic of Korea, two rotavirus vaccines are available; the pentavalent vaccine (RotaTeq™, Merck & Co., Inc, USA), first approved in June 2007, and the monovalent vaccine (Rotarix™, Glaxo Smith Kline Biologicals, Belgium), approved in March 2008. As of March 2023, these vaccines have been included in Korea’s National Immunization Program (NIP) and are provided free of charge to recommended recipients under the age of 8 months. While infants are the primary recipients of the vaccine, this vulnerable group is often excluded or underrepresented in pre-approval clinical trials. As a result, evidence of vaccine safety in infants and children, particularly for rare adverse events, typically derives from post-approval studies. Although the rotavirus vaccine has been in use for several decades and its safety profile is relatively well-established, most published studies focused on selected outcomes such as gastrointestinal disorders (e.g., intussusception) and systemic general symptoms (e.g., fever). Given Korea’s high rotavirus vaccination coverage representing over 88.0% in 2017, finding an unvaccinated control group is especially challenging. Therefore, the use of longitudinal databases and effective signal detection methods is essential for identifying previously unknown potential safety signals within large populations, relying solely on data from vaccinated infants. Tree-temporal scan statistics is a practical data mining method that evaluates a wide range of health outcomes to identify those that may be temporally associated with specific exposures. , This method can detect potential adverse events without prespecifying the specific outcomes or risk intervals of concern. Previous studies have shown its utility in detecting safety signals for vaccines such as human papillomavirus vaccine, , meningococcal conjugate vaccine, live attenuated herpes zoster vaccine, and COVID-19 vaccine. , Given that most rotavirus vaccines are administered to infants and young children, findings from studies applying this methodology provide valuable information that helps healthcare providers, including pediatricians, stay informed about potential risks they might not have otherwise anticipated. Although further rigorous epidemiological studies are required to confirm these signals, this approach can help identify previously unknown and unexpected safety issues. In this study, we detected adverse events following rotavirus vaccination in infants using the tree-temporal scan statistics data mining method and identified potential safety signal that could be further investigated in targeted studies.
Data sources and study population
The study population consisted of infants who received the first dose of rotavirus vaccine within 15 weeks of birth. To identify the study population, we utilized a comprehensive database that combined the Korea Disease Control and Prevention Agency (KDCA)'s immunization registry and the National Health Insurance Service (NHIS) claims data. The Republic of Korea operates a national health insurance program covering over 50 million residents. Healthcare providers deliver medical services and submit claims for medical fees to the Health Insurance Review and Assessment (HIRA) service, which reviews these claims and relays the findings to the NHIS. Based on these records, the NHIS pays healthcare providers for medical services and maintains a comprehensive database including information about insurance eligibility, inpatient and outpatient medical services, prescription records, and medical institutions accumulated in the process. Vaccines are not included in the NHIS database as they are not covered by the national health insurance. However, vaccines provided through the NIP are administered free of charge to eligible recipients by the KDCA. Since 2002, information on NIP vaccinations and recipients has been systematically managed and recorded in the KDCA's Immunization Registry Information System. This study utilized the database linking the KDCA vaccination registry data and the NHIS claims data from June 1, 2016, to December 31, 2022. Rotavirus vaccines administered between June 1, 2016, and October 31, 2022, were included. The two databases were linked using unique resident registration numbers, and researchers accessed the linked anonymized data within the NHIS analysis network.

Exposure and follow-up
An analysis was conducted on RotaTeq and Rotarix, the rotavirus vaccines introduced in the Republic of Korea. Rotarix (RV1), a monovalent vaccine, is recommended to be administered twice, at 2 and 4 months of age, while RotaTeq (RV5), a pentavalent vaccine, is recommended to be administered three times, at 2, 4, and 6 months of age. We considered the first dose of any type of rotavirus vaccine as the exposure. To avoid exposure misclassification or errors in vaccine administration records, we excluded infants who received rotavirus vaccination after 8 months of age, those with heterologous vaccinations, those with multiple vaccination records for the same dose, and those without rotavirus vaccination records within 15 weeks of birth. The follow-up period was set from 1 to 56 days after the first dose of the rotavirus vaccine. This period was chosen based on studies of multi-dose vaccines , to minimize the possibility of time-varying confounding and to prevent overlap with the subsequent recommended dose, which occurs approximately 2 months later. This follow-up period has been validated in previous studies as an appropriate duration for detecting acute adverse events following vaccination, particularly when applying the tree-temporal scan statistic method. The day of vaccination was excluded from the follow-up period to account for potential preexisting conditions that might have been diagnosed during the healthcare visit for vaccination.
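To make the eligibility and follow-up rules above concrete, the following simplified pandas sketch applies them to a hypothetical vaccination table. The column names and the 30-day-month approximation of "8 months of age" are assumptions for illustration; they do not reflect the actual KDCA/NHIS data structures.

```python
# Illustrative cohort-selection sketch; hypothetical columns: infant_id, birth_date, vax_date, product, dose_no.
import pandas as pd

vx = pd.read_csv("rota_vaccinations.csv", parse_dates=["birth_date", "vax_date"])
vx["age_days"] = (vx["vax_date"] - vx["birth_date"]).dt.days

by_infant = vx.groupby("infant_id")
eligible = (
    (by_infant["age_days"].min() <= 15 * 7)      # a rotavirus dose recorded within 15 weeks of birth
    & (by_infant["age_days"].max() <= 8 * 30)    # no dose after ~8 months of age (crude approximation)
    & (by_infant["product"].nunique() == 1)      # no heterologous (mixed-product) schedules
    & (~by_infant["dose_no"].apply(lambda s: s.duplicated().any()))  # no duplicate records per dose
)
cohort_ids = eligible[eligible].index

# Follow-up runs from day 1 to day 56 after the first dose; day 0 (the vaccination day) is excluded.
first_dose = vx.sort_values("vax_date").groupby("infant_id").first().loc[cohort_ids]
first_dose["fu_start"] = first_dose["vax_date"] + pd.Timedelta(days=1)
first_dose["fu_end"] = first_dose["vax_date"] + pd.Timedelta(days=56)
```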
Hierarchical diagnosis tree and incident outcomes
We identified outcomes using the Korean Standard Classification of Diseases, Seventh Revision (KCD-7) codes, which follow a hierarchical structure. The KCD-7 codes are based on the International Classification of Diseases, Tenth Revision (ICD-10) codes. They have been refined to include subdivided categories for common and rare diseases in Korea that are identifiable within the classification system. The highest classification level (chapter) consists of 22 categories, with branches extending up to five levels of classification. To avoid detecting clusters that are too broad or too specific, we did not include clusters at level 1 (broadest) or level 5 (narrowest) (Supplementary Figure S1). For this study, we used a tree structure that excluded certain diagnostic codes deemed unrelated to vaccination or biologically implausible to occur within a few weeks post-vaccination (Supplementary Table S1). These excluded codes included those related to cancer, birth defects, external causes or factors of morbidity or health status, and conditions specific to pregnancy and the perinatal period. , Only incident diagnoses that were recorded with the primary diagnosis code in an inpatient setting during the follow-up period were included in the analysis. Diagnoses were excluded if the patient had been diagnosed with a code sharing the same first three characters (i.e., the same third level of the tree) in any inpatient or outpatient setting from birth up to the first diagnosis date within the follow-up period. Each patient was allowed to contribute multiple outcomes during the follow-up period, provided their diagnoses did not fall within the same third level of the tree.

Risk and control window
The potential risk window for clustering was set to range from a minimum of 2 days to a maximum of 28 days. This window could start anywhere from 1 to 28 days following the first rotavirus vaccination and could end between 2 and 42 days post-vaccination. The control window was defined as the remaining period within the follow-up period, excluding each potential risk window. For example, if the risk window was set to days 1–5 following rotavirus vaccination, the control window would automatically be days 6–56 ( ). Similarly, if the risk window was days 6–30, the control window would be days 1–5 and 31–56 ( ). To exclude unreliable risk windows, such as short clusters starting long after vaccination, the length of the potential risk window was required to be at least 20% of the interval between the vaccination date and the end of the risk window.
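The window rules just described can be expressed directly in code. The sketch below, which is only an illustrative reading of the stated constraints rather than code from the study, enumerates every admissible risk window (length 2–28 days, starting on days 1–28, ending on days 2–42, and spanning at least 20% of the interval from vaccination to the window's end) and derives the complementary control window within the 56-day follow-up.

```python
# Enumerate candidate risk windows and their control windows under the stated rules.
FOLLOW_UP = range(1, 57)  # days 1-56 after the first dose

def candidate_windows():
    for start in range(1, 29):              # the window may start on days 1-28
        for end in range(start + 1, 43):    # and end on days 2-42
            length = end - start + 1
            if length > 28:                 # maximum window length: 28 days (minimum of 2 is implied)
                continue
            if length < 0.2 * end:          # must cover >=20% of the interval from vaccination to 'end'
                continue
            risk = set(range(start, end + 1))
            control = [d for d in FOLLOW_UP if d not in risk]
            yield start, end, control

windows = {(s, e): control for s, e, control in candidate_windows()}
# Consistency with the examples in the text:
assert windows[(1, 5)] == list(range(6, 57))                        # risk days 1-5 -> control days 6-56
assert windows[(6, 30)] == list(range(1, 6)) + list(range(31, 57))  # risk days 6-30 -> control days 1-5 and 31-56
```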
Statistical analysis
We applied the self-controlled tree-temporal scan statistic, which is based on the tree-based scan statistic. This analysis method identifies a wide array of potential adverse events across diverse clinical outcomes and related outcome groups in any potential risk window. This data mining method allows for the detection of potential adverse events with minimal prior assumptions, thereby eliminating the need to pre-specify safety outcome variables. By employing a self-controlled design, the method compares the probability of outcome occurrence over time within individuals, thus controlling for time-invariant potential confounders such as gender, socioeconomic status, and individual chronic predispositions. Under the null hypothesis, the probability of a specific adverse event occurring on any given day within the observation period is uniform across all time intervals and tree branches. Adjusting for multiple comparisons across diverse conditions and time windows, the method compares the number of cases in the risk window with the number of cases expected by chance in the control window. This approach has been particularly useful in detecting safety signals for various medications, including vaccines. The self-controlled tree-temporal scan statistic, adjusted for nodes and time, was used to account for any purely temporal variation in the data that is common to the entire tree. A log-likelihood ratio (LLR)-based test statistic was calculated for each potential adverse event and risk window. To obtain the empirical distribution of the test statistic under the null hypothesis, we generated random datasets, setting the number of Monte Carlo simulations to 9,999. The pre-specified p-value threshold for statistical significance to identify clusters was set at 0.01. If the test statistic from the real dataset ranked within the top 1% of all datasets, including the random datasets, the null hypothesis was rejected at the α = 0.01 level. This conservative approach was employed to further minimize false signals, even though adjustments were already made for multiple comparisons across a large number of clusters. For detected clusters, the attributable risk per 100,000 vaccinees was calculated by dividing the excess number of cases by the total number of doses. The analysis was conducted for all subjects combined, as well as separately for groups divided by sex and by type of rotavirus vaccine.

Classification of detected clusters
Statistically significant clusters identified as potential safety signals were categorized to determine whether they represented known or previously unknown adverse events. Each cluster was classified into one of the following categories: (1) Adverse drug reaction (ADR) – the condition is a known adverse event reported from domestic post-marketing surveillance and reexamination of drug labeling, or listed in sections such as 'Warnings and Precautions,' 'Adverse Reactions,' or 'Possible Side Effects' of the patient information on the U.S. Food and Drug Administration website and the package insert; (2) ADR-related event (ADR-r) – the condition is not explicitly noted in the known risk profile but is clinically considered related to known ADRs; (3) Signal – the condition is not categorized as ADR or ADR-r. Broader disease categories encompassing the higher classification levels of an ADR were classified as ADR-r. Three authors (NYJ, HC, and HK) independently classified the statistically significant clusters, and any disagreements were resolved through discussion. In addition to the primary analysis focusing on the first dose, we performed a sensitivity analysis that included all doses up to the third, provided they were not preceded by a prior dose within 56 days. The study was approved by the Institutional Review Boards of Ewha Womans University (ewha-202309-0010-01) and received a waiver of informed consent. All methods were carried out following the relevant guidelines and regulations. The analysis was implemented using SAS Enterprise Guide version 7.1 (SAS Inc., Cary, NC, USA) and TreeScan software version 2.1.
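To convey the mechanics behind the scan, the deliberately simplified sketch below treats a single diagnosis node: it computes a log-likelihood ratio comparing the observed case count in a candidate risk window with the count expected under a uniform null over days 1–56, derives a Monte Carlo rank-based p-value from simulated datasets, and computes the attributable risk per 100,000 vaccinees as defined above. This is not the TreeScan implementation, which scans all tree nodes jointly and conditions on the tree-wide temporal pattern; it only illustrates the underlying idea, and the example counts at the end are made up.

```python
# Simplified, single-node illustration of the tree-temporal scan idea (not TreeScan itself).
import numpy as np

rng = np.random.default_rng(0)
FOLLOW_UP_DAYS = 56
WINDOWS = [(s, e) for s in range(1, 29) for e in range(s + 1, 43)
           if e - s + 1 <= 28 and e - s + 1 >= 0.2 * e]

def window_llr(case_days, start, end):
    """LLR for 'more cases than expected' inside [start, end], given a uniform null over days 1-56."""
    n = len(case_days)
    c = int(np.sum((case_days >= start) & (case_days <= end)))
    p0 = (end - start + 1) / FOLLOW_UP_DAYS
    if n == 0 or c <= n * p0:
        return 0.0
    return c * np.log(c / (n * p0)) + (n - c) * np.log(max(n - c, 1e-12) / (n * (1 - p0)))

def max_llr(case_days):
    return max(window_llr(case_days, s, e) for s, e in WINDOWS)

def monte_carlo_p(case_days, n_sim=199):
    """Rank-based p-value; the study used 9,999 simulations - fewer are used here to keep the demo fast."""
    observed = max_llr(case_days)
    n = len(case_days)
    null_stats = (max_llr(rng.integers(1, FOLLOW_UP_DAYS + 1, size=n)) for _ in range(n_sim))
    rank = 1 + sum(s >= observed for s in null_stats)
    return observed, rank / (n_sim + 1)

def attributable_risk_per_100k(observed_cases, expected_cases, total_doses):
    """Excess cases divided by the number of doses, scaled to 100,000 vaccinees."""
    return (observed_cases - expected_cases) / total_doses * 100_000

# Example with made-up data: 40 cases, with days 2-7 deliberately over-represented
case_days = np.concatenate([rng.integers(2, 8, size=15), rng.integers(1, 57, size=25)])
print(monte_carlo_p(case_days))
print(attributable_risk_per_100k(observed_cases=30, expected_cases=12.5, total_doses=1_720_778))  # hypothetical counts
```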
Characteristics of the study participants
From June 1, 2016, to October 31, 2022, a total of 1,720,778 infants who received the rotavirus vaccine met the eligibility criteria for the study. Among them, 87,567 infants were hospitalized with diagnoses corresponding to the hierarchical tree structure during the 1–56-day follow-up period after the first rotavirus vaccination. After excluding those with a history of relevant conditions and those diagnosed with codes excluded from the tree structure, 64,752 infants were included as subjects contributing to the incident outcomes of interest (Supplementary Figure 2). shows the demographic and vaccination characteristics of the eligible rotavirus vaccine recipients and the subjects included in the primary analysis. Compared to all eligible subjects, the subjects for the primary analysis with hospitalization records during the follow-up period had a higher proportion of males. The year of birth and the year of the first rotavirus vaccination were similarly distributed among the total vaccinated population, except for 2016 and 2022, which showed a dip because data were limited to June through December and up to October, respectively. The number of vaccinated infants decreased slightly each year after 2017, with a notable drop in post-vaccination hospitalizations after 2020, coinciding with the COVID-19 pandemic.

Tree-temporal scan statistical analysis
The tree-temporal scan statistical analysis identified 72,970 incident diagnoses during the follow-up period among the subjects included in the primary analysis, detecting 71 statistically significant clusters ( ). Of these, 28 clusters were classified as ADR, 17 as ADR-r, and 26 as Signal. Clusters classified as ADR comprised unspecified adverse effects such as infection following immunization (KCD-7 code: T88.0, T88.1) occurring within Days 1–2 (p < .001) and unspecified viral infection (B34) within Days 1–5 (p < .001), and local or systemic adverse effects known to be associated with vaccines, such as fever (R50) within Days 1–2 (p < .001) and urticaria and erythema (L50-L54) within Days 3–9 (p < .001). Another significant category of ADR encompassed acute upper respiratory infections (J00-J06) within Days 28–42 (p < .001), pneumonia (J09-J18) within Days 9–19 or 28–42 (p < .001), acute lower respiratory infections such as bronchiolitis (J21) within Days 8–28 (p < .001), intestinal infections or enteritis (A00-A09) within Days 2–7 (p < .001), purpura (D69) within Days 2–6 (p < .001), vomiting within Days 2–28 (p < .001), and urinary tract infection (N39.0) within Days 27–37 (p = .002). The clusters of ADR-r consisted of fluid, electrolyte and acid-base balance disorders (E87) developing in Days 4–6 (p = .006), intestinal diseases including diarrhea (K55-K64) in Days 5–7 (p < .001), and polyarteritis nodosa and related conditions such as Kawasaki disease (M30) in Days 28–42 (p < .001). Signal clusters included sepsis (A30-A49) within Days 1–20 (p < .001), viral meningitis (A87) and unspecified meningitis (G03) within Days 1–23 (p < .001), viral hepatitis (B15-B19), inflammatory liver disease (K75), and abnormal function results including liver (R94) within Days 4–11 (p < .001), hernia (K40-K46) within Days 2–17 (p < .001), and tubulo-interstitial nephritis (N10) within Days 11–38 (p < .001). These clusters included conditions known to be potentially triggered by rotavirus infection or suggested to be associated with other vaccines.
The cytomegaloviral disease (B25) cluster classified as signal was the only signal considered biologically implausible.

Stratified analysis by subgroup
Sex-stratified results revealed 51 statistically significant clusters in the 882,168 male rotavirus vaccine recipients and 42 clusters in the 838,610 female recipients (Supplementary Tables S2 and S3). Vaccine-stratified results identified 43 statistically significant clusters in the analysis of 806,567 RV1 recipients and 54 clusters in the analysis of 914,211 RV5 recipients (Supplementary Tables S4 and S5). Most diseases identified as statistically significant clusters in the analysis of all subjects combined were similar to those detected in the sex- and vaccine-stratified results. However, in the sex-stratified results for female recipients, clusters related to disorders of other endocrine glands (E20-E35) were detected within Days 17–20 (p = .007). These clusters included incident diagnoses of hypofunction and other disorders of the pituitary gland (E23) and other disorders of the adrenal gland (E27), and were classified as signal. In the vaccine-stratified results for RV1 recipients, an ADR-classified cluster of intussusception (K56.1) within Days 5–8 (p = .005) and an ADR-r-classified cluster of diseases of male genital organs (N40-N51) within Days 6–7 (p < .001), including penis disorders, were specifically detected. In the results for RV5 recipients, a cluster of burn and corrosion of the hip and lower limb (T24) was specifically detected as a signal (p = .004) but was considered biologically implausible.

Sensitivity analysis
In the sensitivity analysis, 4,036,193 doses administered to 1,720,778 rotavirus vaccinees were included. From these, a total of 147,621 incident diagnoses from 127,715 individuals were analyzed, and 70 statistically significant clusters were detected (Supplementary Table 6). Most of these clusters were aligned with the adverse events identified in the primary analysis. However, new clusters classified as ADR were detected, including viral infections characterized by skin and mucous membrane lesions (B00-B09) within Days 28–42 (p = .003), gastro-esophageal reflux disease (K21) within Days 2–18 (p = .005), and unspecified adverse effects of drugs, including allergic reactions and hypersensitivity (T88.7), within Days 1–2 (p < .001). Supplementary Table 7 summarizes the detected clusters and the sources identifying these clusters as potential adverse events.
This study retrospectively analyzed longitudinal data using the self-controlled tree-temporal scan statistics method to confirm the safety profile of the rotavirus vaccine and to detect previously unknown potential adverse events. The analysis included over 1.7 million infants who received the rotavirus vaccine in South Korea. The primary analysis identified 71 statistically significant temporal clusters without prespecifying the types of adverse events or the time periods of concern. Of these, 28 clusters classified as ADR were indicated in the Korean drug labeling, U.S. patient information, or package inserts. Seventeen clusters classified as ADR-r were conditions clinically related to ADRs based on previous knowledge, or conditions classified as ADR at lower-level diagnosis codes. The 26 clusters identified as signals involved conditions suggestive of complications from rotavirus infection or immune responses. The conditions classified as ADRs included disorders such as unspecified viral infections, local or systemic adverse events (e.g., fever), acute upper respiratory diseases (e.g., pharyngitis and pneumonia), and acute lower respiratory infections (e.g., bronchiolitis). These conditions were indicated in the Korean drug labeling as adverse events reported from randomized clinical trials or domestic surveillance of drug use. They were also indicated in sections such as 'Adverse Reactions' or 'Possible Side Effects' of the package insert or the patient information on the U.S. FDA website. Notably, intussusception, detected as a cluster in the RV1-stratified analysis and classified as ADR, is known as one of the serious adverse events associated with the rotavirus vaccine. Both RV1 and RV5 are noted for adverse events related to intussusception, and epidemiological studies from various countries indicate an increased risk of intussusception following vaccination with both RV1 and RV5. , However, some studies suggest that the increased risk may be limited to RV1, , , with no evidence supporting a similar risk for RV5. , This discrepancy suggests that the incidence of intussusception may vary by region and ethnicity, indicating potentially diverse results depending on the study population. Disorders clinically associated with ADRs, such as electrolyte and fluid balance disorders, intestinal diseases, and systemic connective tissue disorders, were identified as ADR-r, as their lower-level diagnosis codes were classified as ADR. Disorders related to fluid, electrolyte and acid-base balance, such as hyponatremia, hyperkalemia, and acidosis, may occur in association with ADRs such as diarrhea and vomiting following rotavirus vaccination. , Kawasaki disease (KCD-7 M30.3), also known as mucocutaneous lymph node syndrome and classified under systemic connective tissue disorders, is explicitly mentioned in package inserts and patient information as an adverse event reported in clinical trials and post-marketing surveillance for both RV1 and RV5. Furthermore, concerning the diseases of male genital organs identified as clusters in the RV1-stratified analysis, balanoposthitis (KCD-7 N48.1) was listed in the Korean drug labeling as an unexpected adverse event. Although this is referenced in the RV5 labeling, it is not included in the RV1 documentation. It was classified as an ADR due to its potential association.
Although sepsis was categorized as a signal rather than an ADR or ADR-r, it has been reported as a complication following rotavirus gastroenteritis and is considered a condition suggestive of rotavirus infection. The exact mechanism by which septicemia develops from rotavirus infection is not fully understood, but it is believed that damage to the intestinal epithelium by the rotavirus allows enteric bacteria to enter the bloodstream, increasing the risk of secondary septicemia. While rotavirus infection most commonly causes gastroenteritis, it has also been associated with other conditions, including central nervous system diseases, hepatobiliary diseases, and respiratory illnesses. To date, no association has been established between rotavirus vaccination and liver diseases, including abnormal liver function, inflammatory liver diseases, or viral hepatitis, which were observed as clusters in this study. However, rotavirus itself can cause significant elevations in liver transaminases, such as aspartate aminotransferase (AST) and alanine aminotransferase (ALT), , and evidence suggests that it may lead to liver dysfunction, including hepatitis, elevated serum sodium, and increased liver transaminases. Additionally, rotavirus has been detected in cerebrospinal fluid in cases of meningitis, indicating the potential for central nervous system complications associated with rotavirus infection. Renal tubulo-interstitial diseases, including nephritis, were detected as statistically significant clusters but have not been associated with rotavirus vaccination to date. However, similar cases have been reported following other vaccinations, such as the COVID-19 vaccine. , Acute interstitial nephritis is often drug-induced through a cell-mediated type Ⅳ hypersensitivity reaction. Some vaccines may trigger acute tubulo-interstitial nephritis by forming immunogenic haptens, which activate the immune response and lead to renal inflammation, as observed in cases of leukocytoclastic vasculitis and other immune cell infiltrations post-vaccination. While no association between the rotavirus vaccine and hernia has been established, conditions such as vomiting, which are considered ADRs of rotavirus vaccination, can increase abdominal pressure, potentially contributing to the development of hernia, including inguinal hernia. , Cytomegaloviral disease, identified as a cluster in the analysis, is caused by cytomegalovirus and is biologically unrelated to the rotavirus vaccine. However, when pregnant women are infected with cytomegalovirus, transmission to the child during pregnancy or birth is possible, and latent cytomegalovirus infections may be reactivated by immune stimuli such as vaccination. No direct association has been found between the rotavirus vaccine and burn injuries to the hip and lower limbs, which were identified as a cluster in the RV5-stratified analysis. The rarity of these cases, combined with the delay between vaccination and diagnosis, suggests that this cluster may represent a false signal. The strength of this study lies in its use of hospitalization data for all infants in Korea, making the findings highly representative and enabling the detection of relatively rare adverse events. Additionally, this study evaluated a wide range of adverse event and risk window combinations without pre-specifying them. However, there are several limitations. First, the outcomes of interest were defined solely by diagnostic codes from claims data, without validation.
This may have led to the inclusion of individuals who did not actually experience the disease or the exclusion of those who did, introducing potential misclassification bias and affecting internal validity. To draw causal inferences, it is crucial to evaluate the validity of the operational definitions and apply robust epidemiological study designs. Furthermore, as this study focused on the entire infant population in Korea, caution should be exercised when generalizing the findings to other countries or populations. Second, long-term adverse events, or events with a constant risk throughout the follow-up period, may not have been detected. Considering the significant differences in health conditions among children of varying ages, a relatively short follow-up period of 56 days was set to minimize time-varying confounding. Consequently, long-term adverse events may not have been identified as statistically significant clusters in this study. Third, the study only included data from hospitalized patients, which could have led to the exclusion of less severe adverse events. As of 2021, the annual average number of outpatient care consultations per capita in Korea was 15.7, the highest among the OECD nations, whose average was 5.9. To prevent excessive detection of false signals and to focus on identifying more severe adverse events, only inpatient data were included in the analysis. Fourth, the impact of other concomitant vaccines recommended for the study population at the same age was not considered. Given the large numbers and varieties of NIP vaccines co-administered with rotavirus vaccines (e.g., diphtheria, tetanus toxoids, and acellular pertussis [DTaP] vaccine, pneumococcal conjugate vaccine, and polio vaccine), further research is needed to assess their impact on safety. Fifth, while the administrative data used in this study provide vaccination and diagnostic records, potential errors in exposure and outcome information cannot be completely excluded. Additionally, the cases included in the analysis were not limited to those with confirmed causal relationships with vaccination. This means that some cases occurring close in time to vaccination may have been coincidental, potentially leading to an overestimation of event incidence. Consequently, this study is limited to identifying signals of increased occurrence following rotavirus vaccination. Further studies specifically designed to evaluate causality are needed to thoroughly investigate these findings and draw definitive conclusions. Lastly, this study applied a hypothesis-generating method based on the self-controlled tree-temporal scan statistic. Therefore, causal conclusions cannot be drawn without a rigorous evaluation of the specific hypothesis that the exposure is associated with an increased risk of the outcome identified as a signal. In conclusion, this study used the tree-temporal scan statistic method to detect safety signals, including a number of well-known adverse events that are consistent with previously known risk profiles. We also identified previously unknown but biologically plausible events associated with immune responses or complications following rotavirus vaccination. The conditions detected as statistically significant signals may be considered priority outcomes for evaluating potential associations with the rotavirus vaccine.
Among the more than 1.7 million infants who received the rotavirus vaccine, only 87,567 were hospitalized during the follow-up period, regardless of whether the hospitalization was vaccine-related or incidental. This finding underscores the rarity of severe adverse events associated with rotavirus vaccination, further supporting the vaccine's robust safety profile. Implementation of this screening approach for safety signal detection can provide timely access to the best available evidence on the safety of medications, especially for vaccines targeting broad populations. This approach offers comprehensive information on potential safety concerns, particularly for healthcare providers and caregivers who administer vaccines to children, allowing them to be informed about issues that may require special caution. However, the tree-temporal scan statistic serves only as the first step in active vaccine surveillance, screening for potential adverse events among the thousands of possible events and risk windows. Therefore, safety signals detected in the analysis need further investigation through epidemiological studies with validated operational outcome definitions to confirm a causal relationship between rotavirus vaccination and the events.
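For orientation, the core of the tree-temporal scan can be illustrated as follows: for a given node of the diagnosis tree, each candidate risk window is scored by a log-likelihood ratio comparing observed with expected cases, conditional on the node's total case count over follow-up. The sketch below is a simplified illustration in R, not the TreeScan implementation used in the study; the 56-day follow-up, the example windows, and the case days are assumptions, and the Monte Carlo step that yields p-values is omitted.

```r
# Simplified tree-temporal scan score for a single diagnosis-tree node.
scan_node <- function(case_days, follow_up = 56,
                      windows = list(c(1, 7), c(1, 14), c(8, 28))) {
  N <- length(case_days)                              # total cases for this node over follow-up
  rows <- lapply(windows, function(w) {
    n  <- sum(case_days >= w[1] & case_days <= w[2])  # observed cases in the window
    mu <- N * (w[2] - w[1] + 1) / follow_up           # expected under a uniform-risk null
    llr <- 0
    if (n > mu) {
      llr <- n * log(n / mu)
      if (N > n) llr <- llr + (N - n) * log((N - n) / (N - mu))
    }
    data.frame(start = w[1], end = w[2], observed = n,
               expected = round(mu, 2), LLR = round(llr, 3))
  })
  do.call(rbind, rows)
}

# Hypothetical days-from-vaccination for hospitalizations coded to one node
scan_node(case_days = c(2, 3, 3, 5, 9, 20, 41, 55))
```

In the full method, this score is computed over every node-window pair and its significance is assessed by Monte Carlo replication under the self-controlled null.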
Cadmium (Cd) is a ubiquitous heavy metal with widespread distribution and well-known toxicity, and contamination of agricultural land by Cd is primarily caused by human activities such as industrial wastewater discharge, disposal of large quantities of metal wastes and sewage sludge, and pesticide misuse. Cd is more mobile than other heavy metals in plants, leading to its increased accumulation and subsequent induction of severe toxicity. This toxicity arises from cation deficiency, inhibition of the biosynthesis of chlorophyll and other essential substances, and increased oxidative damage, such as structural and functional cellular degeneration and destruction of biomolecules, all of which are closely linked to plant growth and development. The composition and functional activities of the plant rhizosphere microbiome are dynamically regulated under various environmental stresses, providing a critical foundation for plant adaptation and health. Interestingly, some microorganisms can reduce the toxicity of Cd to plants and to themselves by adsorbing, complexing, enzymatically converting, and redox-converting Cd ions. For instance, Klebsiella mobilis CIAM 880 can form complexes with unbound Cd ions, which reduces the bioavailability of Cd to barley plants and mitigates its toxic effects. Enterobacter bugandensis can also reduce Cd accumulation in wheat grains through bioprecipitation and extracellular adsorption. Plant functional genes, including those involved in biotic and abiotic stress responses and in nutrient uptake and transport, can modulate the secretion of root exudates and certain ionic signals, which significantly influence rhizosphere microorganisms and are essential for enhancing plant stress tolerance and nutrient use efficiency. Conversely, the activity of rhizosphere microorganisms can induce systemic tolerance in plants by releasing metabolites, which can affect host gene expression and alter phytohormone secretion. For example, Pseudomonas aeruginosa and Burkholderia gladioli are able to reduce Cd uptake and the expression of metal transporter genes in tomato plants, leading to improved growth and photosynthetic pigmentation. NRT1.1B, encoding a nitrate transporter in rice, is involved in the attraction of bacteria that are enriched in O. sativa subsp. indica rice; when assembled into a synthetic community, these bacteria can improve rice growth under organic nitrogen (N) conditions. Our recent work has shown that loss of sst gene function in rice can affect rhizosphere microbes by altering plant metabolites, such as salicin and arbutin, which in turn help the host to resist salt stress. Ammonium transporters (AMTs), involved in ammonium (NH4+-N) uptake and transport in most plants, promote the uptake of N sources by plants. Previous studies have investigated the role of AMTs in promoting NH4+-N transport to alleviate salt stress. A growing body of evidence suggests that NH4+-N, the form of N targeted by AMTs, is more sensitive to Cd concentration than NO3−-N. For example, NH4+-N can inhibit Cd translocation from roots to shoots, thereby protecting Arabidopsis thaliana from Cd toxicity. Recent research in Solanum nigrum L. has also found that the transcription of Cd transport-related genes is regulated by NH4+-N signaling, preventing Cd accumulation and flux, compared with NO3−-N.
The researchers also suggest that NH4+-N can fix Cd in the cell wall, thereby improving Cd tolerance. However, the potential of the GmAMT2.1/2.2 genes to alleviate external Cd toxicity by regulating metabolites and microbiota in the soybean rhizosphere remains to be determined. Synthetic communities (SynComs) have been increasingly used in recent years to study the interactions between microbes and their hosts. The development of high-throughput microbial isolation techniques has enabled researchers to efficiently isolate more microbial strains. The isolated strains can then be compared with sequencing results to select suitable candidates for SynCom construction. Although research on SynComs has often focused on bacteria, fungi also play a very important role in helping plants resist heavy metal stress. In addition, fungi can establish symbiotic networks with bacteria, which may be more effective in helping plants resist stresses. Therefore, it is necessary to evaluate the role of different synthetic communities in helping soybean resist Cd toxicity. In this study, we hypothesized that the GmAMT2.1/2.2 genes enhance NH4+-N uptake in soybean and help it resist Cd toxicity by influencing metabolites that recruit beneficial rhizosphere microorganisms. To test these hypotheses, high-throughput sequencing and liquid chromatography-mass spectrometry (LC-MS) assays were used to identify taxonomic and metabolic differences and their correlations in the rhizosphere. In addition, we isolated and identified bacteria and fungi in the rhizosphere of different genotypes and constructed bacterial, fungal, and bacterial-fungal cross-kingdom SynComs. Finally, we verified that these SynComs influence the expression of heavy metal tolerance-related genes in soybean roots. The objectives of this study were to: (1) investigate how the GmAMT2.1/2.2 genes influence rhizosphere microbes by regulating metabolites and N patterns, which in turn help soybean resist Cd toxicity; and (2) determine which SynCom of recruited microorganisms is most effective in alleviating Cd toxicity in soybean, and elucidate the corresponding physiological and molecular mechanisms.
GmAMT2.1/2.2 are responsible for Cd toxicity alleviation by affecting N patterns in soybean

To identify candidate AMTs involved in Cd toxicity alleviation, we examined transcriptome data for AMT homologous genes under the CK and Cd treatments. Among these, two homologous genes, GmAMT2.1/2.2, which are predominantly expressed in roots (Supplementary Fig. ), were significantly upregulated by Cd treatment (Fig. ). To further explore the transcriptional response of GmAMT2.1/2.2 to Cd treatment, we analyzed the expression levels of GmAMT2.1/2.2 in soybean plants treated with Cd at different concentrations. The expression levels of GmAMT2.1/2.2 exhibited an upward trend with increasing Cd concentration and time (except for a decrease in GmAMT2.1 expression after 6 h of treatment and a decrease in GmAMT2.2 expression after 12 h of treatment) (Fig. ). In addition, we generated four lines to confirm the identity and function of GmAMT2.1/2.2, including two stable double knockout lines (GmAMT2.1/2.2) and two overexpression lines (OXAMT2.2). Mu1 carried a 5 bp deletion (AGCAT) at sgRNA1 of GmAMT2.1 and a 7 bp deletion (CAATGGG) at sgRNA2 of GmAMT2.2. Mu2 carried a 1 bp insertion (T) at sgRNA1 of both GmAMT2.1 and GmAMT2.2; consequently, Mu1 and Mu2 are loss-of-function mutants. In the two overexpression lines (OX1 and OX2), the expression of GmAMT2.2 was upregulated 1.87- and 2.07-fold, respectively (Fig. ; Supplementary Figs. , ). When exposed to identical Cd concentrations, the knockout lines consistently displayed more pronounced Cd sensitivity than the wild-type (WT) lines in terms of plant growth, whereas the overexpression lines did not differ significantly from the WT (Fig. ). Cd treatment resulted in a significant decrease in the fresh weight of all four lines. Furthermore, an increase in N content, especially NH4+-N, was observed in response to Cd toxicity. This upward trend in N content was also observed upon overexpression of GmAMT2.2, whereas the opposite relationship was observed in the mutants (Fig. ). Taken together, these findings suggest that NH4+-N may play a role in alleviating Cd toxicity and that the GmAMT2.1/2.2 genes may inhibit Cd transport and accumulation by increasing NH4+-N levels, thereby mitigating the detrimental effects of Cd toxicity on soybean plants.

Modification of rhizosphere microbiota by GmAMT2.1/2.2 under Cd toxicity

From 2,236,825 high-quality bacterial 16S rRNA reads and 2,078,640 fungal ITS reads, we identified a total of 11,980 bacterial OTUs and 1,702 fungal OTUs. Cd toxicity had no significant effect (P > 0.05) on the alpha diversity of rhizobacterial communities but increased the alpha diversity of fungal communities in both the WT and OX soybean genotypes. Under no-Cd conditions, the Mu genotype had a higher Shannon index than the other two genotypes, whereas under Cd toxicity there was no significant difference among the three genotypes (Fig. ). In terms of beta diversity, Cd toxicity and the interaction of Cd toxicity and genotype significantly altered the structure of microbial communities (Fig. ; Supplementary Table ). Furthermore, the WT, OX, and Mu genotypes generally harbored significantly different microbial communities under both Cd and no-Cd conditions (PERMANOVA, pairwise comparison, n = 6, P < 0.05) (Fig. ; Supplementary Table ), suggesting that GmAMT2.1/2.2 gene activity affects the assembly of the soybean rhizosphere microbiome.
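For reference, the beta-diversity comparison reported above (Bray-Curtis PCoA with PERMANOVA on genotype, Cd treatment, and their interaction) can be run along the following lines. This is a minimal sketch, assuming a sample-by-OTU count matrix `otu` and a metadata table `meta` with `Genotype` and `Cd` columns; the object names and levels are illustrative, not taken from the study.

```r
library(vegan)

# Bray-Curtis dissimilarities between samples
dist_bc <- vegdist(otu, method = "bray")

# Principal coordinate analysis (PCoA) and percent variance of the first two axes
pcoa <- cmdscale(dist_bc, k = 2, eig = TRUE)
axis_pct <- round(100 * pcoa$eig[1:2] / sum(pcoa$eig[pcoa$eig > 0]), 1)

# PERMANOVA: genotype, Cd treatment and their interaction (999 permutations)
adonis2(dist_bc ~ Genotype * Cd, data = meta, permutations = 999)

# Pairwise PERMANOVA for two genotypes under Cd (illustrative subset)
sub <- meta$Cd == "Cd" & meta$Genotype %in% c("WT", "OX")
adonis2(vegdist(otu[sub, ], method = "bray") ~ Genotype,
        data = droplevels(meta[sub, ]), permutations = 999)
```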
The rhizosphere soils of the three genotypes exhibited comparable community composition under Cd and no-Cd conditions (Fig. ). Briefly, the most prevalent bacterial phyla were Acidobacteria, Proteobacteria, Actinobacteria, and Chloroflexi, with relative abundances ranging from 21.56% to 22.65%, 7.42% to 7.98%, 5.09% to 5.55%, and 4.98% to 5.40%, respectively. The fungal communities were dominated by Ascomycota, Basidiomycota, and Chytridiomycota, with relative abundances varying from 62.08% to 70.27%, 12.71% to 25.92%, and 0.73% to 4.26%, respectively. In general, five bacterial phyla (i.e., Proteobacteria, Actinobacteria, Myxococcota, Firmicutes, and Gemmatimonadetes) and five fungal phyla (i.e., Ascomycota, Basidiomycota, Chytridiomycota, Mortierellomycota, and Kickxellomycota) were significantly affected by Cd toxicity. One bacterial phylum (i.e., Verrucomicrobiota) and one fungal phylum (i.e., Basidiomycota) were influenced by genotype. Moreover, three bacterial phyla (i.e., Acidobacteriota, Actinobacteriota, and Verrucomicrobiota) and three fungal phyla (i.e., Ascomycota, Basidiomycota, and Glomeromycota) were altered by the interaction between genotype and Cd toxicity (Supplementary Table ). The co-occurrence network structure of all genotypes was significantly affected by Cd toxicity, but the responses varied (Fig. ; Supplementary Table ). Specifically, Cd toxicity increased network complexity, as evidenced by increases in both the clustering coefficient and the number of edges. The Mu genotype showed the highest number of edges and clustering coefficient under Cd toxicity, followed by the WT and OX genotypes. These results underline the impact of Cd toxicity on microbial community dynamics and the genotype-specific mechanisms that shape community composition under Cd toxicity.

Enriched potential beneficial microorganisms by GmAMT2.1/2.2 under Cd toxicity

We investigated the effect of Cd toxicity on the relative abundance of microbes in the three soybean genotypes (Figs. , ). Cd toxicity increased the number of microbial OTUs in Mu, WT, and OX by 306, 256, and 280 (bacteria), and 77, 71, and 94 (fungi), respectively (Figs. b, ). The enriched bacterial OTUs mainly belonged to Proteobacteria, Firmicutes, Chloroflexi, and Verrucomicrobiota, while the enriched fungal OTUs were mainly associated with Ascomycota and Rozellomycota. Venn analysis revealed that the WT and OX genotypes carrying the AMT genes showed a specific enrichment of 274 bacterial and 96 fungal OTUs under Cd toxicity. Based on the isolation and identification of culturable microbes, six bacterial OTUs, including OTU759 (Tumebacillus), OTU215 (Ralstonia), OTU1135 (Alicyclobacillus), OTU57 (Burkholderia), OTU419 (Paenibacillus), and OTU5695 (Methylophilus), and six fungal OTUs, including OTU76 (Aspergillus), OTU99 (Talaromyces), OTU144 (Penicillium), and OTU168 (Cladosporium), were identified and selected to show their relative abundances in the different treatments (Figs. c, ).

Response of soil metabolites to GmAMT2.1/2.2 under Cd toxicity

To explore the complex metabolic alterations in the rhizosphere soil under the different treatments and genotypes, we performed LC-MS analysis on soil metabolites of Mu, WT, and OX with and without Cd exposure. A total of 798 distinct peaks with defined identities were detected across the three genotypes. Orthogonal partial least squares discriminant analysis (OPLS-DA) illustrated a clear separation between the treatments (Supplementary Fig. ; Supplementary Table ).
Using a combination of filtering procedures with thresholds of variable importance in projection (VIP) > 1.0 and P < 0.05, we identified 35 Cd-induced metabolites that increased and 30 that decreased in OX and/or WT (Supplementary Table ). KEGG pathway enrichment analysis revealed differential enrichment and depletion of metabolites in secondary metabolite biosynthesis, including purine metabolism, arginine biosynthesis, galactose metabolism, stilbenoid and flavonoid biosynthesis, tryptophan metabolism, isoflavonoid biosynthesis, and phenylpropanoid biosynthesis (Fig. ). Notably, the levels of genistein, chrysin, piceatannol, glycitein, daidzein, daidzin, and coumestrol increased, while xanthurenic acid, sinapoyl alcohol, L-glutamic acid, guanine, 2-deoxyguanosine, and sucrose decreased in WT and/or OX plants under Cd exposure (Fig. ). Correlation analysis between metabolites and microbial species revealed a positive association of sinapyl alcohol, daidzein, and glycitein with Alicyclobacillus, Tumebacillus, Ralstonia, and Methylophilus in the Mu and WT genotypes. However, the significant increase in the levels of these metabolites in the OX genotype was positively correlated with the abundance of different microbial species (Fig. ). These findings suggest that GmAMT2.1/2.2 may play a role in regulating metabolic pathways associated with microbial community assembly in the rhizosphere.

Effect of different SynComs on Cd tolerance in soybean

Using a high-throughput cultivation method, we sequenced and identified 403 bacterial and 268 fungal colonies from the rhizosphere compartment across the different treatments. Among them, six bacterial and eight fungal strains, identical to the microorganisms enriched in the soybean genotypes carrying the AMT genes based on the sequencing results, were used to construct three SynComs (bacterial, fungal, and cross-kingdom SynComs) (Supplementary Table ). The efficacy of these SynComs in enhancing soybean tolerance to Cd toxicity was evaluated (Fig. ). The results showed that SynCom-treated plants had significantly higher shoot and root fresh weight, NH4+-N content, and translocation factor, with reduced Cd content in shoots and roots compared to heat-killed control plants (Fig. ). Additionally, the cross-kingdom and fungal SynComs had a more pronounced impact on these indicators than the bacterial SynCom. RNA-Seq analysis was conducted to elucidate the molecular mechanisms underlying the SynCom-mediated enhancement of soybean growth under Cd toxicity. PCA showed distinct separation of the four treatments (P < 0.05) (Fig. ). SynCom application induced differential expression of numerous genes in soybean root tissues (Fig. ). Notably, all three SynComs significantly upregulated the expression of genes involved in cytochrome P450 (Glyma.08G350800), endopeptidase inhibitor activity (Glyma.16G211700), metabolic processes (Glyma.07G007000, Glyma.01G205900, and Glyma.02G104600), a zinc finger protein (Glyma.04G044900), messenger RNA biogenesis (Glyma.12G093100), an ethylene response factor (Glyma.12G117000), and a BURP domain family member (Glyma.12G217300). The bacterial SynCom specifically upregulated genes related to cell redox homeostasis and a casparian strip membrane domain protein (CASP) family member, while the fungal SynCom exclusively upregulated genes involved in protein phosphorylation.
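For illustration, the DEG calling behind these comparisons (DESeq2 at FDR < 0.05, as described in the Methods) follows the pattern sketched below; the count matrix, metadata, and contrast labels are hypothetical placeholders rather than the study's actual objects.

```r
library(DESeq2)

# `counts`: gene-by-sample matrix of raw read counts
# `coldata`: data frame with a `treatment` column (e.g., control, bacterialSC, fungalSC, crossSC)
dds <- DESeqDataSetFromMatrix(countData = counts,
                              colData   = coldata,
                              design    = ~ treatment)
dds <- DESeq(dds)

# Example contrast: cross-kingdom SynCom vs. heat-killed control
res  <- results(dds, contrast = c("treatment", "crossSC", "control"))
degs <- subset(as.data.frame(res), !is.na(padj) & padj < 0.05)
head(degs[order(degs$padj), ])   # top differentially expressed genes
```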
KEGG enrichment analysis was further performed to explore the distinct metabolic pathways modulated by the three SynComs in alleviating Cd toxicity (Fig. ). The results showed that the three SynComs mainly influenced metabolic processes associated with amino acid metabolism, biosynthesis of other secondary metabolites, carbohydrate metabolism, and signal transduction. Among these, phenylpropanoid biosynthesis exhibited the highest enrichment in both the bacterial and fungal SynComs, while plant hormone signal transduction showed the highest enrichment in the cross-kingdom SynCom. However, certain pathways demonstrated specific enrichment under different SynCom treatments. For instance, the toll-like receptor signaling pathway, which is linked to plant disease resistance and immune response, was specifically enriched under the bacterial SynCom treatment. Flavonoid biosynthesis, ascorbate and aldarate metabolism, and glucosinolate biosynthesis were exclusively associated with the fungal SynCom, while plant-pathogen interaction and mineral absorption were only associated with the cross-kingdom SynCom.
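A minimal sketch of this kind of pathway enrichment (clusterProfiler with an FDR-style cutoff, as in the Methods) is shown below, assuming the DEG identifiers have already been converted to KEGG gene IDs for Glycine max ("gmx"); the conversion step and object names are assumptions, not taken from the study.

```r
library(clusterProfiler)

# `deg_ids`: DEG identifiers mapped to KEGG gene IDs for soybean ("gmx")
ekegg <- enrichKEGG(gene          = deg_ids,
                    organism      = "gmx",      # Glycine max KEGG organism code
                    pvalueCutoff  = 0.05,
                    pAdjustMethod = "BH")

head(as.data.frame(ekegg))   # enriched pathways with adjusted p-values
```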
Plant functional genes can actively shape the microbial community structure of the rhizosphere by regulating root metabolites, thereby improving the nutrient uptake efficiency and environmental stress resistance of crops. AMTs regulate the uptake and transport of NH4+-N, thereby promoting plant uptake of N sources. NH4+-N gives rise to various N-containing macromolecules, which regulate Cd uptake and translocation by plants and alter soybean root exudates and the rhizosphere environment. In this study, we used three soybean genotypes with normal expression (WT), overexpression (OX), and knockout (Mu) of the GmAMT2.1/2.2 genes to demonstrate the key role of GmAMT2.1/2.2 in influencing NH4+-N uptake, Cd levels, and soil metabolites, resulting in significant shifts in rhizosphere microbial community structure (Fig. ). Plant materials carrying the GmAMT2.1/2.2 genes were found to attract a diverse array of potentially beneficial microorganisms, including bacterial genera such as Tumebacillus, Alicyclobacillus, Paenibacillus, and Methylophilus, and fungal genera such as Aspergillus, Penicillium, Cladosporium, and Talaromyces (Figs. , ). These enriched microbial taxa, with their relevant functions, may play an essential role in enhancing plant resilience to Cd toxicity. For instance, Tumebacillus and Alicyclobacillus are soil organic matter (SOM) decomposers that can mobilize scarce nutrients, thereby improving plant adaptation to Cd toxicity. Paenibacillus and Methylophilus have been demonstrated to promote plant N uptake. Talaromyces sp., a plant growth-promoting fungus, releases terpenoid-like volatiles that stimulate plant growth. Certain Ralstonia species, such as Ralstonia eutropha Q2-8 and Ralstonia Bcul-1, are considered heavy metal-resistant strains and can enhance plant Cd tolerance through different mechanisms, including modulating heavy metal resistance gene expression, strengthening cell wall components, or sequestering Cd into root cell vacuoles. Aspergillus sp. activates genes involved in glutathione (GSH) biosynthesis under Cd toxicity, alleviating the detrimental effects of Cd. Moreover, Paenibacillus, Penicillium, and Cladosporium are known to directly absorb and immobilize soil Cd through exopolysaccharide production, formation of stable phosphate precipitates, and oxidation of Mn(II) to biogenic manganese oxides (BMOs), respectively. Thus, these rhizosphere microorganisms with diverse functionalities can synergistically enhance plant Cd tolerance through various mechanisms, including nutrient provision, regulation of gene expression, chemical transformation, and direct Cd absorption/immobilization. Together they form a complex functional network of rhizosphere microorganisms that promotes healthy plant growth under Cd toxicity. Microbial community composition and metabolite profiles exhibited significant variation among the soybean genotypes. In particular, the abundances of Paenibacillus, Tumebacillus, Methylophilus, Aspergillus, Penicillium, and Talaromyces were positively correlated with sinapyl alcohol and coumestrol levels. Sinapyl alcohol serves as a precursor of coumestrol, a type of coumarin. Coumarin exudation, regulated by MYB72, has been shown to shape the root microbiome and promote the health of Arabidopsis thaliana. Therefore, our findings suggest that GmAMT2.1/2.2 may enhance soybean adaptation to Cd toxicity by modulating the root microbiome through coumarin synthesis.
GmAMT2.1/2.2 could also influence soil N content and alter the synthesis of flavonoids, including genistein, daidzin, and daidzein (Fig. ). These flavonoids may influence the symbiotic relationship between rhizobia and legumes, enriching beneficial microorganisms such as Cladosporium and Aspergillus, and further modulating plant defense responses. In addition, low concentrations of the flavonoid chrysin can promote spore germination and root colonization by arbuscular mycorrhizal fungi, enhancing the rhizobia-plant symbiosis and potentially increasing soybean tolerance to Cd. Moreover, Paenibacillus, Ralstonia, and Penicillium were significantly correlated with piceatannol, a compound known to enhance plant resistance to biotic and abiotic stresses, indicating that piceatannol may also selectively recruit beneficial microbes. Although a direct link between piceatannol and specific microbes has not yet been established, this could be a direction for future research. All three SynComs alleviated Cd toxicity, with the fungal and cross-kingdom SynComs exhibiting greater efficacy than the bacterial SynCom (Fig. ). This suggests that the microbiome regulated by the GmAMT2.1/2.2 genes plays a crucial role in enhancing soybean resistance to Cd toxicity, and that fungi may hold greater potential than bacteria in helping soybean adapt to Cd toxicity. Fungal mitigation of Cd toxicity involves diverse mechanisms, including metal biosorption onto the cell wall, intracellular accumulation and sequestration, and deposition of metal compounds within and around the mycelium. In addition, unlike bacteria, fungi can form extensive mycelial networks, facilitating bidirectional nutrient exchange with plants. The fungal SynCom may also promote soybean adaptation to Cd toxicity through the induction of genes involved in flavonoid and glucosinolate biosynthesis and ascorbic acid metabolism in soybean roots, which was not observed with the bacterial SynCom. These substances contribute to Cd toxicity mitigation by enriching certain beneficial microorganisms, increasing sulfur availability, and scavenging excess H2O2 from plant cells. Of note, a recent study by Xie et al. showed that microorganisms might regulate the immune system responsible for Cd resistance in host plants. This aligns with our finding that the bacterial SynCom specifically enriched the toll-like receptor signalling pathway, which serves as a sensor for pathogens and enhances the immune response of plants. Furthermore, DEGs significantly upregulated by the SynComs were involved in plant defense signalling, abiotic stress resistance, and plant hormone signalling. It has been reported that lipoxygenase 1 (LOX1) and ZAT10 are directly related to the Cd toxicity response. When Arabidopsis thaliana was exposed to Cd toxicity, LOX1 was involved in Cd-induced signal transduction and increased jasmonic acid (JA) biosynthesis. Similarly, ZAT10, a member of the C2H2 zinc finger gene family, negatively regulated Cd uptake in Arabidopsis thaliana, enhanced Cd detoxification, and positively regulated heavy metal detoxification-related genes such as NAS1, IRT2, and MTP3. RD22, which contains the plant-specific BURP domain, was also highly upregulated under SynCom application and is involved in resistance to several abiotic stresses, including drought, salt, and osmotic stress. CYP93D1, belonging to the cytochrome P450 monooxygenase family, is involved in JA metabolism and can be activated by ZmCLA4 to affect lignin biosynthesis.
Cd toxicity stimulates lignin biosynthesis, and the upregulation of CYP93D1 increases lignin content, which enhances Cd adsorption and promotes plant growth. Two other DEGs significantly upregulated under SynCom application are related to plant defense mechanisms: UGT73B3 is involved in regulating redox status and ROS reactivity, while ERF9 serves as a negative regulator of DREB AND EAR MOTIF PROTEIN 1 (DEAR1)-dependent ethylene/JA-mediated plant defense. Furthermore, phenylpropanoid biosynthesis was the key enriched pathway under the SynCom treatments, indicating that SynComs activate the phenylpropanoid biosynthetic pathway, synthesizing lignin from accumulated phenolic compounds to overcome heavy metal-induced stress. Taken together, SynCom application enables the coordinated regulation of LOX1, ZAT10, RD22, CYP93D1, UGT73B3, and ERF9 expression, triggering an interconnected network of defense responses, plant hormone signalling pathways, lignin biosynthesis, and heavy metal detoxification mechanisms that enhances soybean's ability to mitigate Cd toxicity. It is noteworthy that the soil metabolites measured in this study are not identical to root exudates. Although the study used a single soil type, the majority of the differences in soil metabolites are, in theory, likely attributable to the soybean genotypes. However, we cannot ignore the possibility that soil microorganisms influenced by root exudates under GmAMT2.1/2.2 genetic regulation could themselves upregulate or downregulate extracellular metabolites. Several metabolites from root exudates have been shown to alter soil microbial community composition, suggesting that changes in the soil metabolite profile may be partly due to extracellular compounds readily released by microorganisms. Thus, future studies should dedicate greater effort to distinguishing the relative importance of specific root- or soil-derived metabolites in shaping rhizosphere microbial structure. Moreover, advanced techniques, such as microfluidic-based cultivation and diversification, should be applied to isolate additional uncultivated microbes, including fungi and bacteria. The efficacy of SynComs in promoting Cd tolerance in other crops and under field agricultural conditions also requires further validation. In summary, our study reveals that Cd toxicity induces GmAMT2.1/2.2, leading to the recruitment of beneficial microorganisms, such as Tumebacillus, Alicyclobacillus, Methylophilus, Aspergillus, Talaromyces, and Penicillium, by altering metabolites such as sinapyl alcohol, genistein, coumestrol, and piceatannol. SynComs composed of these microorganisms collectively affect various molecular mechanisms, thereby enhancing plant resistance to Cd toxicity. These mechanisms include the ascorbate and aldarate metabolism and glucosinolate biosynthesis pathways associated with the fungal SynCom, and the plant-pathogen interaction and mineral absorption pathways associated with the cross-kingdom SynCom. Notably, the cross-kingdom and fungal SynComs showed greater potential for synergistic effects. Overall, our results draw attention to the crucial role of microorganisms, especially fungi within SynComs, in the molecular mechanisms underlying Cd toxicity resistance in soybean. The insights provided by our metabarcoding data, and their subsequent validation by microbial culture profiling, substantially enhance our understanding of plant functional gene regulation and synergistic resistance mechanisms within the root microbiome.
This study provides critical data and guidance for the identification of agriculturally important microbial communities as potential breeding targets, as well as the development of microbial formulations to enhance resistance to Cd toxicity in sustainable agriculture.
Soil and soybean materials

Experimental soils were collected from the surface layer at a depth of 10 cm from acidic soils in China in the summer of 2020. The sampling site was Guangzhou (113°35′E, 23°15′N), Guangdong Province, and the soil is classified as a Udic Agrosol according to USDA soil taxonomy. The characteristics of the soil were: pH 5.3, porosity 40.2, Cd 0.2 mg/kg, organic matter 5.2 g/kg, NH4+-N 5.3 mg/kg, available phosphorus (P) 15.5 mg/kg, NO3−-N 24 mg/kg, and available potassium (K) 85.2 mg/kg. To construct the CaMV35S-driven GmAMT2.2 line, the full-length CDS of GmAMT2.2 was cloned into the pTF101-eGFP vector, and the resultant construct was introduced into Agrobacterium tumefaciens EHA101. To produce the GmAMT2.1 and GmAMT2.2 loss-of-function mutants, CRISPR-Cas9 gene editing was used to knock out GmAMT2.1 and GmAMT2.2. CRISPR/Cas9-mediated gene editing was performed as previously reported. Briefly, two sgRNAs targeting the first exons of GmAMT2.1 and GmAMT2.2 were designed on the CRISPR-P server (http://crispr.hzau.edu.cn/CRISPR/). The sgRNAs were sub-cloned into the pGES201 plasmid, and the resultant construct was introduced into Agrobacterium tumefaciens EHA105. Agrobacterium-mediated transformation was performed as previously reported, with the soybean variety Young used as the transgene recipient.

RNA extraction and quantitative real-time PCR (qRT-PCR)

Total RNA was extracted from soybean or Arabidopsis thaliana with a TRNzol Universal Kit (DP424, TIANGEN, Beijing, China). cDNA was synthesized from total RNA using a PrimeScript RT Reagent Kit with gDNA Eraser (RR047A, Takara Bio, Japan) according to the manufacturer's instructions. DNA fragment amplification was performed using KOD FX Neo (TOYOBO (SHANGHAI) BIOTECH CO., Shanghai, China). qRT-PCR was conducted using TB Green Premix Ex Taq II (RR820, Takara Bio, Japan) on a CFX96 Real-Time System (Bio-Rad, Hercules, CA, USA). Data were normalized to the reference gene GmActin3. All analyses were performed with three biological replicates and three technical replicates, and the results were analyzed using the 2^(-ΔΔCt) method. Student's t-test implemented in Excel (Excel 2016) was used to evaluate the statistical significance of the data. The primers used are listed in Supplementary Table .

Experimental design

A pot experiment was conducted at the College of Agriculture, South China Agricultural University, Guangzhou, China. A randomized complete block design was used, with the three soybean genotypes grown with or without Cd, giving a total of six treatments (3 soybean genotypes × 2 Cd treatments = 6 treatments). Soybean seeds of the three genotypes were surface-sterilized with alcohol and then planted in separate pots. The pots were of the same size and contained approximately 2.5 kg of soil. In each pot, eight seeds of uniform size were initially sown and subsequently thinned to two on the 10th day. The soil moisture content was maintained at 80% of field capacity throughout the experiment. There were six replicates (pots) per treatment. The soybeans were grown under controlled greenhouse conditions (day temperature 28-32 °C, night temperature 16-20 °C).

Sample collection and soil chemical analysis

Twenty days after soybean sowing, 36 samples were collected and subjected to amplicon sequencing. Briefly, the roots were shaken gently to remove the soil loosely adhering to them.
The roots with tightly attached soil, which was considered the rhizosphere soil, were then transferred to 1x phosphate-buffered saline. Ten grams of rhizosphere soil were obtained from each sample, of which two grams were stored at -80 °C for the microbial experiment and LC-MS analysis. The remaining soil was stored at 4 °C for subsequent microbial isolation and soil chemical characterization.

Amplicon sequencing and data analysis

Microbial DNA was extracted using the Fast DNA SPIN Kit for Soil (MP Biomedicals, Santa Ana, CA), and the V4 region of the bacterial 16S rRNA gene was amplified using primers 515F/806R. Amplification of the fungal ITS region targeted the ITS1 region using primers ITS5/1737F. Briefly, each PCR reaction contained 4 μL buffer, 2 μL dNTPs (2 mM), 1 μL of each forward/reverse primer (10 μM), 10 ng of DNA, and 10 μL ddH2O. PCR was performed on an ABI 7900 system with the following program: 94 °C for 45 s; 35 cycles of 95 °C for 15 s, 55 °C for 10 s, and 72 °C for 10 s; and 50 °C for 15 min. The amplicons, pooled in equimolar amounts, were subjected to paired-end sequencing on the Illumina MiSeq platform (Shanghai Majorbio Bio-pharm Technology Co., Ltd). The bacterial 16S rRNA gene and fungal ITS sequences were processed with QIIME (v1.9.1) and VSEARCH, starting from the raw FASTQ files. After quality filtering, the primers and low-quality sequences with quality scores < 20 and lengths < 200 bp were removed. The paired-end reads corresponding to bacterial 16S rRNA genes and fungal ITS regions were merged into single files, respectively. Operational taxonomic units (OTUs) were clustered at a 97% sequence similarity cutoff using CD-HIT. Taxonomic assignment was performed using the SILVA (v138) and UNITE (v8.0) databases for bacteria and fungi, respectively. Mothur and QIIME (v1.9.1) were used to calculate the alpha and beta diversity of microbial communities, respectively. The bacterial and fungal OTU tables were rarefied to 60,024 and 60,049 reads, respectively, for alpha diversity estimation.

Metabolite measurement

Fifty mg of each rhizosphere soil sample were added to a solvent of acetonitrile, methanol, and water (volume ratio 2:2:1). The sample was then vortexed and sonicated three times in an ice-water bath. After centrifugation, the supernatant was subjected to UHPLC-Q Exactive (QE) Orbitrap MS analysis. Quality control samples were generated by pooling the supernatants from all samples, and soil metabolites were detected by liquid chromatography-tandem mass spectrometry (LC-MS/MS) (Shanghai Majorbio Bio-pharm Technology Co., Ltd). The OPLS model was used to identify metabolites with the greatest group-specific differences based on their variable importance in projection (VIP) scores, with a threshold of 1 (OPLS, VIP > 1).

High-throughput cultivation of microbes and construction of synthetic communities

The rhizosphere bacteria and fungi were isolated by high-throughput culture according to Zhang et al. (2021) and Zhou et al. (2022), with minor modifications. Briefly, the rhizosphere suspensions were diluted to an optimal dilution such that 30% of the wells showed bacterial growth. Then, 100 μL of the diluted suspension was added to 96-well plates containing different culture media. For the bacteria, the media included LB medium, beef extract peptone medium, and tryptone yeast extract glucose medium.
For the fungi, the media included 1/4 strength RBM, 1/10 strength PDA, and 1/4 strength MEA. In addition, 0.01 M CdCl2 was added to the media to obtain bacteria and fungi with greater Cd resistance. After 7 days of incubation, the microbes growing in each well were identified: bacterial 16S rDNA and fungal 18S rDNA were amplified and sequenced at Sangon Biotech Co., Ltd (Shanghai, China). Sequence alignment was performed against the NCBI database, and each isolate was preserved in 1 mL of 30% (v/v) glycerol at -80 °C. We identified bacteria and fungi that were significantly enriched in the rhizosphere of the soybean genotypes carrying the GmAMT2.1/2.2 genes by analyzing the composition of the rhizosphere microbial community in the different treatments. These strains were then considered as candidates for building the SynComs. The OD600 of each bacterial fermentation broth was adjusted to 0.02 (approximately 10^7 cells/mL). Fungal strains were propagated by shake-flask fermentation in 1/10 strength PDB medium and diluted to 10^6 conidia/mL. To identify how root-derived microbes enhance plant adaptation to Cd toxicity, we conducted an experiment with three SynComs: a bacterial SynCom, a fungal SynCom, and a bacterial-fungal cross-kingdom SynCom. Each SynCom consisted of equal proportions of the respective bacterial and/or fungal strains. The SynComs were added to the soil 10 days after soybean sowing. Plant fresh weight, root Cd2+ and NH4+-N content, and the translocation factor were determined 15 days after the addition of the SynComs.

RNA-seq of soybean treated with SynComs

RNA-seq was used to investigate the effect of the SynComs on the expression of related genes in soybean. Total RNA was extracted from soybean root samples using RNAiso Plus reagent (Takara Bio). The constructed libraries underwent thorough quality assessment before sequencing on the Illumina platform (PE 150). The raw data files were converted into raw reads via base calling. RSEM and STAR were used to perform sequence alignment and quantification of the detected genes. Using the "stats" package in R 4.1.2, principal component analysis (PCA) was conducted to show the effect of the three SynComs on gene expression in soybean roots. DESeq2 with an FDR < 0.05 was used to identify differentially expressed genes (DEGs). A hierarchical clustering heatmap (based on the "stats" package's hclust function) was used to group the DEGs, which were subsequently annotated and analyzed for enriched pathways. The identified genes were annotated according to SoyBase (https://www.soybase.org), and KEGG (http://www.genome.jp/kegg/) analysis was performed using the "clusterProfiler" package to find significantly enriched metabolic or signalling pathways with a threshold of FDR < 0.05.

Statistical analysis

Principal coordinate analysis (PCoA) was conducted in R 4.1.2 using the "vegan" package, based on Bray-Curtis dissimilarities. PERMANOVA (permutational multivariate analysis of variance) and the Mantel test were used to evaluate the significance of the PCoA results. A generalized linear model was run in R using the "edgeR" package to analyze the microbial and metabolite differences between each pair of treatments, and the results were visualized in volcano plots.
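A minimal sketch of this edgeR-based pairwise comparison is shown below; the OTU count matrix, group labels, and thresholds are illustrative placeholders rather than the study's actual objects.

```r
library(edgeR)

# `otu`: OTU-by-sample count matrix; `group`: factor giving the two treatments compared
dge <- DGEList(counts = otu, group = group)
dge <- calcNormFactors(dge)                   # TMM normalization

design <- model.matrix(~ group)
dge <- estimateDisp(dge, design)

fit <- glmQLFit(dge, design)                  # quasi-likelihood GLM fit
qlf <- glmQLFTest(fit, coef = 2)              # effect of the second group level

tab <- topTags(qlf, n = Inf)$table            # logFC, PValue, FDR per OTU
enriched <- subset(tab, FDR < 0.05 & logFC > 0)
depleted <- subset(tab, FDR < 0.05 & logFC < 0)
```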
Using the "vcd" package, the Kruskal-Wallis test was performed in R to assess the enrichment of microbes, and the results were visualized in ternary plots using the "ggplot2" package. Differences in soil chemical properties and in microbial relative abundance at the phylum level were evaluated using Genstat (version 13.0) with two-way analysis of variance (ANOVA). Integrated analysis of the microbiome and metabolome data was performed using M2IA. Microbial co-occurrence networks, restricted to OTUs with an average relative abundance greater than 0.1% across samples, were constructed and analyzed to determine network connectivity in the genotypes. Using the "Hmisc" and "igraph" packages in the R environment, Spearman coefficients between OTUs were determined, and correlations with r > 0.8 and P < 0.05 were included in the network. The networks were explored and visualized using Gephi (v0.8.2). The code used for the study is accessible on GitHub (https://github.com/liantengxiang1988/GmAMT2.1-2.2-shape-rhizosphere-microbiome-mitigate-cadmium-toxicity).

Reporting summary

Further information on research design is available in the Reporting Summary linked to this article.
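As a minimal sketch of the co-occurrence network construction described under Statistical analysis above, assuming a sample-by-OTU relative-abundance matrix that has already been filtered to OTUs with mean abundance above 0.1%; the object names are illustrative.

```r
library(Hmisc)
library(igraph)

# `otu_filt`: sample-by-OTU matrix restricted to OTUs with mean relative abundance > 0.1%
corr <- rcorr(as.matrix(otu_filt), type = "spearman")

# Keep only strong, significant correlations (r > 0.8, P < 0.05)
keep <- (corr$r > 0.8) & (corr$P < 0.05)
keep[is.na(keep)] <- FALSE
diag(keep) <- FALSE

g <- graph_from_adjacency_matrix(keep * 1, mode = "undirected", diag = FALSE)
g <- delete_vertices(g, which(degree(g) == 0))   # drop OTUs with no retained correlations

ecount(g)                          # number of edges
transitivity(g, type = "global")   # clustering coefficient
```

The resulting edge list can then be exported for layout and visualization in Gephi, as done in the study.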
Experimental soils were collected at the surface layer at a depth of 10 cm from acidic soils in China in the summer of 2020. The sampling site was Guangzhou (113°35′N, 23°15′E), Guangdong Province, which is classified as Udic Agrosol, according to USDA soil taxonomy. The characteristics of the soil were: pH 5.3, porosity 40.2, Cd 0.2 mg/kg, organic matter 5.2 g/kg, NH 4 + -N 5.3 mg/kg, available phosphorus (P) 15.5 mg/kg, NO 3 - -N 24 mg/kg, and available potassium (K) 85.2 mg/kg. To construct the CaMV35S-drived GmAMT2.2 , the full-length CDS of GmAMT2.2 was cloned into the pTF101-eGFP vector, and the resultant construct was introduced into Agrobacterium tumefaciens EHA101. To produce the GmAMT2.1 and GmAMT2.2 loss-of-function mutants, the gene-editing tool CRISPR-Cas9 was used to knockout GmAMT2.1 and GmAMT2.2 . CRISPR/Cas9-mediated gene editing was performed as previously reported . Briefly, the two sgRNAs targeting the first exon of GmAMT2.1 and GmAMT2.2 were designed on CRISPR-P server ( http://crispr.hzau.edu.cn/CRISPR/ ) . The sgRNAs was sub-cloned into pGES201 plasmid, and the resultant construct was further introduced into Agrobacterium tumefaciens EHA105. Agrobacterium -mediated transformation as previously reported , , and the soybean variety Young was used as the transgene receptor.
Total RNA was extracted from soybean or Arabidopsis thaliana with a TRNzol Universal Kit (DP424, TIANGEN, Beijing, China). The cDNA was synthesized from total RNA using a PrimeScript RT Reagent Kit with gDNA Eraser (RR047A, Takara Bio, Japan) according to the manufacturer’s instructions. DNA fragment amplification was performed using KOD FX neo (TOYOBO (SHANGHAI) BIOTECH CO., Shanghai, China). qRT-PCR was conducted using TB Green Premix Ex Taq II (RR820, Takara Bio, Japan) with a CFX96 Real-Time System (Bio-Rad, Hercules, CA, USA). Data were normalized to the reference genes GmActin3 . All analyses were performed with three biological replicates and three technical replicates. The results were analyzed using the 2 –ΔΔCt method. Student’s t -test implemented in Excel software (Excel 2016) was used to evaluate the statistical significance of the data. The primers for the markers were listed in Supplementary Table .
A pot experiment was conducted at the College of Agriculture, South China Agricultural University, located in Guangzhou, China. A randomized complete block design was used for the experiment, with three soybean genotypes (with or without Cd), for a total of six treatments (3 soybean genotypes × 2 Cd treatments = 6 treatments). Soybean seeds of the three genotypes were sterilized with alcohol and then planted in separate pots. The pots were of the same size and contained approximately 2.5 kg of soil. In each pot, eight seeds of uniform size were initially sown and subsequently thinned to two on the 10th day. The soil moisture content was maintained at 80% of the field capacity throughout the experiment. For each treatment, there were six replicates (plots). The soybeans were grown under controlled greenhouse conditions (day temperature 28 ~ 32 °C, night temperature 16 ~ 20 °C).
Twenty days after soybean sowing, 36 samples were collected and subjected to amplicon sequencing. Briley, the roots were shaken gently to remove the soil adhering to the roots. The roots and attached soil, which consider the rhizosphere soil, were then transferred to 1x phosphate-buffered saline. Ten grams of rhizosphere soil were obtained, of which two grams were stored at -80°C for the microbial experiment and LC-MS analysis. The remaining samples were stored at 4 °C for subsequent microbial isolation and soil chemical characterization measurements.
Utilizing the Fast DNA SPIN Kit for Soil (MP Biomedicals, Santa Ana, CA), microbial DNA was extracted, and the bacterial 16 S rRNA gene was specifically targeted by amplifying the V4 region using primers 515 F/806R . Meanwhile, amplification of the fungal ITS region was achieved by targeting the ITS1 region using primers ITS5/1737 F. Briefly, each PCR reaction contained 4 μL buffer, 2 μL dNTPs (2 mM), 1 μL of forward/reverse primer (10 μM), 10 ng of DNA and 10 μL ddH 2 O. The PCR was then performed on an ABI 7900 system according to the following program 94 °C for 45 s; 35 cycles of 95 °C for 15 s, 55 °C for 10 s, 72 °C for 10 s; and 50 for 15 min. The amplicons that were pooled in equimolar amounts were subjected to paired-end sequencing on the Illumina MiSeq platform (Shanghai Majorbio Bio-pharm Technology Co., Ltd). The bacterial 16 S rRNA gene and fungal ITS sequences were processed with QIIME (v1.9.1) and VSEARCH, starting from the raw FASTQ files. After quality filtering, the primers and low-quality sequences with scores < 20 and lengths < 200 bp were removed. The paired-end reads that correspond to bacterial 16 S rRNA genes and fungal ITS regions were merged into single files, respectively. The operational taxonomic units (OTUs) were classified at a 97% sequence similarity cutoff using CD-HIT . Taxonomic assignment was performed using the SILVA (v138) and UNITE (v8.0) databases for bacteria and fungi, respectively. Mothur and QIIME (v1.91) were used, respectively, to calculate the alpha and beta diversity of microbial communities. Rarefaction of bacterial and fungal OTU tables resulted in 60,024 and 60,049 reads, respectively, for alpha diversity estimation.
Fifty mg of each rhizosphere soil sample were added to a solvent of acetonitrile, methanol, and water (volume ratio 2:2:1). The mixture was vortexed and sonicated three times in an ice-water bath, then centrifuged, and the supernatant was subjected to UHPLC-Q Exactive (QE) Orbitrap MS analysis. Quality control samples were generated by pooling the supernatants from all samples, and soil metabolites were detected via liquid chromatography-tandem mass spectrometry (LC-MS/MS) (Shanghai Majorbio Bio-pharm Technology Co., Ltd). Orthogonal projections to latent structures (OPLS) modelling was used to identify the metabolites that differed most between groups on the basis of their variable importance in projection (VIP) scores, with a threshold of 1 (OPLS, VIP > 1).
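The OPLS/VIP screening was carried out on the sequencing provider’s platform and the exact software is not stated; as one common way to reproduce such a VIP > 1 screen in R, the sketch below uses the Bioconductor package ropls on a hypothetical intensity matrix. The file name, object names and group labels are placeholders, not details from this study.

```r
library(ropls)

# Hypothetical metabolite intensity matrix (samples x metabolites) and group labels.
metab <- read.table("metabolite_intensities.txt", header = TRUE, row.names = 1)
group <- factor(c(rep("control", 6), rep("Cd", 6)))   # placeholder design

# Orthogonal PLS-DA with one predictive component; orthoI = NA lets ropls
# choose the number of orthogonal components automatically.
model <- opls(as.matrix(metab), group, predI = 1, orthoI = NA)

# Variable importance in projection; metabolites with VIP > 1 are retained,
# matching the threshold used in the text.
vip <- getVipVn(model)
differential_metabolites <- names(vip)[vip > 1]
```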
Rhizosphere bacteria and fungi were isolated by high-throughput culturing according to Zhang et al. (2021) and Zhou et al. (2022) with minor modifications. Briefly, the rhizosphere suspensions were diluted to an optimal dilution such that approximately 30% of the wells showed bacterial growth. Then, 100 μL of the diluted suspension was plated into 96-well plates containing different culture media. For the bacteria, the media included LB medium, beef extract peptone medium and tryptone yeast extract glucose medium. For the fungi, the media included 1/4 strength RBM, 1/10 strength PDA and 1/4 strength MEA . In addition, 0.01 M CdCl 2 was added to the media to select bacteria and fungi that were more resistant to Cd. After 7 days of incubation, the microbes in each well were identified. Bacterial 16S rDNA and fungal 18S rDNA were amplified and sequenced at Sangon Biotech Co., Ltd (Shanghai, China). Sequences were aligned against the NCBI database, and each microbial isolate was preserved in 1 mL of 30% (v/v) glycerol at -80 °C. By comparing the composition of the rhizosphere microbial community among treatments, we identified bacteria and fungi that were significantly enriched in the rhizosphere of the soybean genotypes carrying the GmAMT2.1/2.2 genes; these strains were considered candidates for building SynComs. The OD 600 of each bacterial fermentation broth was adjusted to 0.02 (approximately 10 7 cells/ml). Fungal strains were propagated by shake-flask fermentation in 1/10 strength PDB medium and diluted to 10 6 conidia/ml . To examine how root-derived microbes enhance plant adaptation to Cd toxicity, we constructed three SynComs: a bacterial SynCom, a fungal SynCom, and a cross-kingdom SynCom combining bacteria and fungi. Each SynCom consisted of equal proportions of the respective bacterial or fungal strains. The SynComs were added to the soil 10 days after soybean sowing. Plant fresh weight, root Cd 2+ and NH 4 + -N contents and the translocation factor were determined 15 days after the addition of the SynComs.
RNA-seq was used to investigate the effect of the SynComs on the expression of related genes in soybean. Total RNA was extracted from soybean root samples using RNAiso Plus reagent (Takara Bio). The constructed libraries underwent a thorough quality assessment before sequencing on the Illumina platform (PE 150), and the raw data files were converted into raw reads via base calling. STAR and RSEM were used for sequence alignment and quantification of the detected genes . Principal component analysis (PCA) was conducted with the “stats” package in R 4.1.2 to visualize the effect of the three SynComs on gene expression in the soybean root. DESeq2 with an FDR threshold of 0.05 was used to identify differentially expressed genes (DEGs) . DEGs were visualized with a hierarchical clustering heatmap built with the hclust function of the “stats” package, and were subsequently annotated and analyzed for enriched pathways . The identified genes were annotated against SoyBase ( https://www.soybase.org ), and KEGG ( http://www.genome.jp/kegg/ ) enrichment analysis was then performed with the “clusterProfiler” package to find significantly enriched metabolic or signalling pathways at a threshold of FDR < 0.05 , .
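As an illustration of the DEG and KEGG-enrichment workflow described above, the following R sketch combines DESeq2 (FDR < 0.05) with clusterProfiler. The count file, sample sheet and the assumption that gene identifiers already match KEGG soybean (“gmx”) IDs are placeholders rather than details from the study.

```r
library(DESeq2)
library(clusterProfiler)

# Hypothetical gene-level count matrix (rows = genes, columns = samples);
# DESeq2 expects raw, non-negative integer counts.
counts  <- as.matrix(read.table("gene_counts.txt", header = TRUE, row.names = 1))
coldata <- data.frame(
  row.names = colnames(counts),
  treatment = factor(rep(c("control", "SynCom"), each = 3))   # placeholder design
)

# Differential expression with DESeq2; DEGs are called at FDR < 0.05 as in the text.
dds <- DESeqDataSetFromMatrix(countData = counts, colData = coldata, design = ~ treatment)
dds <- DESeq(dds)
res <- results(dds, contrast = c("treatment", "SynCom", "control"))
degs <- rownames(res)[which(res$padj < 0.05)]

# KEGG pathway enrichment of the DEGs; "gmx" is the KEGG code for Glycine max,
# and the gene identifiers must already be KEGG-compatible for this call to work.
ekegg <- enrichKEGG(gene = degs, organism = "gmx", pvalueCutoff = 0.05)
head(as.data.frame(ekegg))
```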
Principal coordinate analysis (PCoA) based on Bray-Curtis dissimilarities was conducted in R 4.1.2 using the “vegan” package. Permutational multivariate analysis of variance (PERMANOVA) and the Mantel test were used to evaluate the significance of the PCoA results. A generalized linear model was run in R with the “edgeR” package to analyze microbial and metabolite differences between each pair of treatments, and the results were visualized in volcano plots . The Kruskal-Wallis test was performed in R with the “vcd” package to calculate the enrichment of microbes, and the results were visualized in ternary plots using the “ggplot2” package . Differences in soil chemical properties and in microbial relative abundance at the phylum level were evaluated in Genstat (version 13.0) with two-way analysis of variance (ANOVA). Integration of the microbiome and metabolome data was analyzed using M 2 IA . Microbial co-occurrence networks were constructed from OTUs with an average relative abundance greater than 0.1% across samples and analyzed to compare network connectivity among the genotypes. Spearman correlation coefficients between OTUs were calculated with the ’Hmisc’ and ’igraph’ packages in R, and correlations with r > 0.8 and P < 0.05 were retained in the network . The networks were explored and visualized using Gephi (v 0.8.2). The code used for the study is accessible on GitHub ( https://github.com/liantengxiang1988/GmAMT2.1-2.2-shape-rhizosphere-microbiome-mitigate-cadmium-toxicity ).
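The sketch below illustrates, under assumed file names and metadata columns, how the Bray-Curtis PCoA, PERMANOVA and Spearman co-occurrence network described above can be assembled in R with vegan, Hmisc and igraph. It is a minimal reconstruction of the reported thresholds (mean relative abundance > 0.1%, r > 0.8, P < 0.05), not the authors’ exact code, which is available in their GitHub repository.

```r
library(vegan)
library(Hmisc)
library(igraph)

# Hypothetical rarefied OTU table (samples x OTUs) and sample metadata with
# placeholder columns "genotype" and "cd_treatment".
otu  <- read.table("otu_table_rarefied.txt", header = TRUE, row.names = 1)
meta <- read.table("sample_metadata.txt", header = TRUE, row.names = 1)

# PCoA on Bray-Curtis dissimilarities.
bc   <- vegdist(otu, method = "bray")
pcoa <- cmdscale(bc, k = 2, eig = TRUE)

# PERMANOVA testing genotype and Cd treatment effects on community composition.
adonis2(bc ~ genotype * cd_treatment, data = meta, permutations = 999)

# Co-occurrence network: keep OTUs with mean relative abundance > 0.1%.
rel  <- sweep(otu, 1, rowSums(otu), "/")
keep <- colMeans(rel) > 0.001
corr <- rcorr(as.matrix(otu[, keep]), type = "spearman")

# Retain edges meeting the thresholds given in the text (r > 0.8, P < 0.05).
adj <- corr$r
adj[abs(corr$r) <= 0.8 | corr$P >= 0.05 | is.na(corr$P)] <- 0
g <- graph_from_adjacency_matrix(adj, mode = "undirected", weighted = TRUE, diag = FALSE)
g
```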
Further information on research design is available in the Reporting Summary linked to this article.
Supplementary Information Reporting summary
Targeting ferroptosis to enhance the efficacy of mesenchymal stem cell-based treatments for intervertebral disc degeneration
The intervertebral disc (IVD) is a well-hydrated, fibro-cartilaginous tissue comprising a central, proteoglycan-rich nucleus pulposus (NP), an outer annulus fibrosus, and a cartilage endplate . It is critical for spinal mobility and stability . Located within the inner disc, NP cells (NPCs) are primarily responsible for synthesizing and maintaining the extracellular matrix (ECM) . The onset and progression of intervertebral disc degeneration (IVDD) are widely recognized to be driven by the loss of NPCs and the breakdown of the ECM . IVDD can lead to lower back pain which, if it worsens, can result in disability and deteriorate patients' quality of life . The condition involves mineralization and calcification, which reduce the IVD's nutrient supply and metabolism, thereby worsening IVDD - . Although certain perspectives have suggested that directly targeting NPCs to boost their vitality could delay IVDD - , transplanted NPCs often fail to proliferate sufficiently to produce adequate cell quantities or secrete sufficient functional ECM components owing to immune responses and limited self-renewal capability , , . Moreover, harvesting autologous NPCs could cause harm to the donor disc. Given these challenges, implanting NPC-based cells proves ineffective in regenerating degenerated NP tissue. Mesenchymal stem cell (MSC)-based therapy has emerged as a new field in regenerative medicine - . MSCs, with their anti-inflammatory, regenerative, and immunomodulatory properties, are sourced from various tissues such as bone marrow, adipose tissue, the umbilical cord, and NP - , . Transplanted MSCs migrate to damaged IVD tissue, differentiate into chondrogenic cell types, enhance proteoglycan production, and exhibit NPC characteristics , . Following differentiation into chondrogenic cell types, MSCs express high levels of chondrogenic markers such as type II collagen and aggrecan, key components of the ECM in the IVD - . Thus, MSC transplantation could mitigate the progression of IVDD by supplementing depleted NPCs and exerting paracrine effects on compromised or aging NPCs - . For instance, hypoxia-preconditioned MSCs maintain an NPC-like phenotype and curb IVD mineralization . Furthermore, the secretion of cytokines and growth factors by MSCs is crucial for enhancing the viability of resident NPCs, upregulating NP marker gene expression, and stimulating local tissue cells , . However, the harsh and complex microenvironment of degenerative IVDs poses a severe challenge to the survival of transplanted MSCs, thereby limiting the effectiveness of MSC-based therapies - , - . This adverse microenvironment is primarily due to excessive reactive oxygen species (ROS) and inflammation - . The combination of excess ROS and inflammation leads to oxidative damage, reduces the retention rate of engrafted MSCs, and impedes their osteogenic differentiation, thus diminishing the efficacy of IVDD treatment. Additionally, the pathological progression in degenerated IVDs may impair the essential functions of MSCs. Elevated ROS levels contribute to protein carbonylation, lipid peroxidation, and DNA damage in engrafted MSCs .
It is widely recognized that when pathological stimuli exceed their stress tolerance, MSCs undergo irreversible programmed cell death in such harsh environments , , - . Consequently, enhancing the survival rate of MSCs under oxidative stress (OS) is essential for improving the efficacy of MSC-based therapy for IVDD. The severe oxidative stress in degenerated IVDs can induce various forms of programmed cell death in transplanted MSCs, including apoptosis, pyroptosis, and necrosis - . These forms of regulated cell death are reportedly observed in MSCs under oxidative conditions - . However, studies have shown that targeted strategies have not effectively mitigated the excessive loss of engrafted MSCs - , . In MSC-based therapy for liver injury repair, poor retention of MSCs in the OS environment of the liver significantly diminishes their therapeutic efficacy . Notably, the addition of inhibitors of apoptosis, pyroptosis, and necrosis to MSCs under OS did not significantly delay the loss of stem cell retention. Similarly, our preliminary study on bone marrow mesenchymal stem cells (BMSCs) transplantation for repairing degenerated IVDs found that inhibiting apoptosis, pyroptosis, or necrosis did not significantly prevent BMSC death . Thus, it is plausible that other forms of regulated cell death induced by ROS stress significantly contribute to the excessive loss of MSCs in degenerated IVDs. In recent years, substantial evidence has established that OS-induced ferroptosis plays a crucial role in the survival and retention of transplanted MSCs, posing a significant threat - , - . Iron, a critical trace element, is indispensable for maintaining redox reactions and homeostasis in organisms - . Recent studies have shown that iron overload is an independent risk factor for human intervertebral disc degeneration (IVDD) and accelerates its pathological progression, indicating that transplanted MSCs are also exposed to this iron overload microenvironment - . An iron overload microenvironment induces oxidative stress (OS) damage to MSCs and disrupts iron homeostasis within the stem cells, affecting the balance between oxidation and antioxidant systems. Ferroptosis, characterized by iron-dependent accumulation of lipid peroxides to lethal levels, threatens the survival of transplanted MSCs and results in poor MSC retention , - . Our preliminary work has also confirmed that ferroptosis is responsible for the low MSC retention rate when engrafted into degenerative IVDs , . Therefore, targeting ferroptosis in MSC-based therapy is recognized as a promising approach for IVDD, and developing inhibitors of ferroptosis in MSCs may enhance the efficacy of MSC transplantation in degenerative IVDs.
First, MSC-based therapy has shown promise in preclinical treatments for IVDD. Yuan et al. implanted cultured autologous BMSCs into degenerative IVDs and noted improved spinal segmental stability in a goat IVD defect model. Similarly, a rat IVDD model induced by needle puncture demonstrated that intradiscal injection of MSCs effectively restored the structural integrity and tissue composition of the IVD, highlighting the potential of MSCs in treating IVDD . Second, ongoing clinical trials on MSC-based therapy for IVDD have shown a moderate therapeutic effect , - (Table ). Gomez-Ruiz et al. reported on a 10-year follow-up of a prospective phase I / II clinical trial involving autologous MSC transplantation, noting improvements in radicular pain and low back pain (evaluated by the Visual Analog Scale (VAS) and the Oswestry Disability Index (ODI)). Lee et al. evaluated the therapeutic effects of transplantation of adipose-derived stem cells (ADSCs) in patients with lumbar degenerative disc diseases, also reporting improvements in VAS pain scores, ODI scores, and radiological scores. Thus, these clinical trials support the efficacy of stem cell therapy for IVD regeneration. Current evidence suggests that stem cell transplantation therapy could be a viable option for partially restoring the biological characteristics of degenerated IVDs. However, concerns persist regarding the long-term safety and efficacy of MSC transplantation for repair treatments. Despite numerous benefits, potential long-term issues with MSC therapy include improper differentiation, immune suppression, and the risk of tumor formation - . Cultured human MSCs may undergo spontaneous transformation under in vitro culture conditions, which do not accurately reflect the characteristics of MSCs in the bone marrow microenvironment in vivo . Consequently, their proliferation and differentiation capabilities may be affected, impacting the long-term efficacy of subsequent transplantation therapies. The primary concern remains that transplanted MSCs may undergo unwanted differentiation, potentially hindering anti-tumor immune responses and promoting new blood vessel formation, thereby facilitating tumor growth and metastasis - . Despite significant progress in MSC therapy over the past few decades, many challenges remain. Therefore, large-scale follow-up studies are essential to confirm the long-term efficacy and safety of stem cell therapy, including the evaluation of stem cell immunocompatibility, stability, heterogeneity, differentiation, and migratory capacity. Nevertheless, some studies have proposed exosomes secreted by MSCs as a safer alternative to MSC-based therapy - . Stem cell-derived exosomes act as intercellular messengers and provide therapeutic effects similar to those of their parent MSCs, such as tissue regeneration, immunomodulation, and anti-inflammation - . To date, only a few clinical studies have reported on the safety and potential efficacy of MSC-derived exosomes (MSC-Exos), including their use in preclinical and clinical studies of IVDD - . Additionally, the rapid in vivo clearance of MSC-Exos and their relatively low extraction and purification efficiencies also limit their long-term therapeutic benefits . Given the dominance of MSC-based therapy in current clinical trials, this review focuses on stem cell transplantation therapy and summarizes the relevant literature on targeting MSC ferroptosis to optimize preclinical strategies for the effective treatment of IVDD.
Ferroptosis is a form of regulated cell death (RCD) distinct from apoptosis , triggered by the accumulation of free intracellular iron and the disruption of redox regulatory mechanisms - . It is characterized by chromatin condensation and disruption of membrane integrity, and is morphologically evident as mitochondrial condensation, disappearance of mitochondrial cristae, and increased density of the mitochondrial membrane - . Transferrin receptor protein 1 (TFR1) is responsible for iron uptake, while ferroportin manages iron export, both essential for maintaining intracellular iron homeostasis - . Ferritin, comprising ferritin heavy chain 1 (FTH1) and ferritin light chain (FTL), also helps maintain iron homeostasis by storing iron to prevent toxicity - . Nuclear receptor coactivator 4 (NCOA4) mediates ferritinophagy ('ferritin autophagy'), in which NCOA4 binds FTH1 and facilitates ferritin delivery to autophagosomes for lysosomal degradation, releasing sequestered iron . This process leads to the generation of reactive oxygen species (ROS) via the iron-based Fenton reaction, and excessive ROS production, often referred to as oxidative stress (OS), accelerates lipid membrane oxidation and causes lipid bilayer membrane rupture, ultimately inducing ferroptosis - . Hence, lipid peroxidation is considered a crucial indicator of OS - . Solute carrier family 7 member 11 (SLC7A11) is vital in the cystine/glutamate antiporter system Xc-, which transports cystine into the cell for glutathione (GSH) biosynthesis . Glutathione peroxidase 4 (GPX4) uses the antioxidant GSH as a cofactor to detoxify lipid peroxides effectively and prevent ferroptosis - (Figure ). Increasing evidence suggests that targeting ferroptosis could significantly enhance MSC-based therapies - , - , - (Table ). A deeper understanding of ferroptosis mechanisms in MSCs is crucial for developing strategies to improve their functionality and survival, ultimately optimizing cell-based therapy outcomes. 3.1 Inhibiting MSC ferroptosis via the iron metabolism pathways Ferroptosis is implicated in the OS-induced loss of transplanted MSCs under harsh degenerative conditions in intervertebral discs (IVDs), with this involvement closely regulated by intracellular iron metabolism - . Iron ions disrupt the redox balance when accumulated excessively within cells and tissues, catalyzing ROS generation and promoting ferroptosis - . In response to the initiation mechanisms of ferroptosis, researchers have developed strategies encompassing iron uptake, storage, and transport to preserve cellular iron homeostasis and prevent MSC ferroptosis. In an OS injury in vitro model, Jing et al. demonstrated significant activation of the ferroptosis pathway through transcriptomic analysis and phenotypic experiments. Notably, while NCOA4 expression levels increased and ferritin levels decreased, their binding rate was found to be elevated, suggesting that NCOA4-mediated ferritin degradation contributes to iron accumulation and promotes ferroptosis. In a rat model of smoking-related osteoporosis, inhibitors of ferritinophagy or ferroptosis were shown to suppress BMSC ferroptosis, enhance cell viability, and thereby reduce BMSC dysfunction and reverse bone damage . Similarly, in dental pulp stem cells, NCOA4-mediated ferritinophagy has been linked to ferroptosis, indicated by cytosolic iron overload, elevated ROS levels, and increased NCOA4 expression . Knocking down NCOA4 was observed to reduce ferritin degradation.
Additionally, another study targeting the iron storage pathway showed potential in inhibiting MSC ferroptosis and enhancing MSC survival rates. In an in vitro model of BMSC ferroptosis induced by erastin, overexpression of FTH1/FTL was found to reduce ROS levels and abrogate lipid peroxidation, markedly improving BMSC survival . This finding indicates that targeting the FTH1 / FTL pathway could enhance resistance to ferroptosis and improve engrafted BMSC survival. Moreover, other studies have employed pre-treatment of stem cells with deferoxamine (DFO). DFO, known as a ferroptosis inhibitor, chelates cytoplasmic iron ions, preventing their interaction with peroxides and thus inhibiting the formation of harmful radicals . Hopfner et al. found that DFO-preconditioned ADSCs significantly enhanced regenerative capabilities for wound healing in diabetes while reducing intracellular ROS levels, offering a new approach to improve diabetic ADSC functionality. Khoshlahni et al. also reported that DFO-preconditioned BMSCs resisted OS damage in stressful environments. Through iron depletion by DFO, BMSCs showed improved survival rates under OS conditions, facilitating subsequent BMSC-based transplantation therapies. These findings underscore the importance of modulating the iron metabolism pathway in maintaining iron homeostasis and inhibiting MSC ferroptosis. We were also acutely aware of the critical role of targeting MSC ferroptosis in clinical MSC treatments. Therefore, we hypothesized that regulating iron metabolism in MSCs influences the efficiency of stem cell transplantation in repairing degenerated IVDs. Free ferrous iron (Fe 2+ ) acts as a potent oxidative factor, generating unstable radicals via the Fenton reaction , . Coenzyme Q10 (Co-Q10) has been reported to reduce the perferryl radical, chelate iron, and thus prevent iron overload - . Lazourgui et al. confirmed that Co-Q10 helps regulate iron levels, improving cellular antioxidant status and alleviating oxidative stress (OS) during insulin resistance. Peng et al. observed that intracellular iron ion levels, when overloaded, corresponded with a continuous decrease in Co-Q10, exacerbating cellular OS, further suggesting Co-Q10's efficacy as an iron chelating agent. In a rat needle puncture IVDD model, Sun et al. administered Co-Q10 preconditioned BMSCs into degenerative IVDs, noting improved viability, restored mitochondrial structure and function, and increased production of ECM components. Thus, Co-Q10 protects against OS-induced BMSC ferroptosis, providing a protective role in cellular survival and differentiation, enhancing the efficacy of BMSC transplantation therapy, and may represent a viable treatment option for IVDD. Similarly, adding ferritin, crucial for iron homeostasis and detoxification, has shown cytoprotective effects on BMSCs and promoted their differentiation into osteoblasts, essential for bone tissue health - . The protective mechanism likely involves enhancing the iron ion storage pathway, effectively countering OS-induced BMSC ferroptosis. However, direct in vivo research evidence is currently lacking regarding the regulation of intracellular iron levels in stem cells and its impact on the efficiency of repairing degenerated IVDs. More importantly, future studies in both cellular and rat IVDD models will aim to demonstrate that interventions in the iron metabolism pathway can desensitize transplanted MSCs to ferroptosis, further validating the regulation of ferroptosis by imbalanced iron homeostasis. 
Targeting iron-chelating drugs, iron metabolism, or redox-related proteins may offer a new strategy to enhance the efficiency of MSC-based therapy in IVDD. 3.2 Inhibiting MSC ferroptosis via the antioxidant pathways It is widely acknowledged that oxidative stress (OS) is a primary pathogenic factor in the numerous pathophysiological processes of IVDD , - . Studies have shown that OS accelerates IVDD, and the OS microenvironment within degenerated discs jeopardizes the survival of transplanted MSCs , , . OS is characterized by an imbalance between ROS production and scavenging , . Iron overload, overwhelming the antioxidant defense system, initiates the classic ferroptosis pathway, closely linking ferroptosis to OS , - . Park et al. reported that excessive ROS production activates OS-induced lipid peroxidation, thereby exacerbating ferroptosis-dependent cytotoxicity in human astrocytes. Sánchez-Ortega et al. observed that excessive ROS production also induces ferroptosis in lung squamous cell carcinoma (LUSC), suggesting that targeting OS-induced LUSC ferroptosis could present a new therapeutic approach. Increasing evidence indicates that molecular pathways activated by OS intersect with those of ferroptosis in transplanted MSCs, sharing several molecular targets such as accumulation of lipid peroxidation products, and low levels of GSH and GPX4 , - . Persistent and excessive OS in the transplantation site triggers ferroptosis in MSCs due to elevated ROS levels, significantly reducing MSC retention after transplantation. Accordingly, targeting ferroptosis could markedly improve the effectiveness of stem cell transplantation therapy by reducing intracellular OS levels or enhancing antioxidant defenses. For example, in an erastin-treated BMSC model, the antioxidant quercetin demonstrated effective anti-ferroptosis properties through its antioxidative functions, protecting BMSCs from erastin-induced ferroptosis . Tert-butyl hydroperoxide (TBHP) has been shown to induce cellular oxidative injury, and we used TBHP to simulate OS conditions in an in vitro model , - . TBHP treatment significantly increased ferroptosis indicators (Fe 2+ aggregation, increased ROS levels, excessive lipid peroxidation) and altered levels of ferroptosis-related proteins (Long-chain-fatty-acid--CoA ligase 4 (ACSL4), Prostaglandin G/H synthase 2 (PTGS2), GPX4, and Ferritin) . Conversely, ROS scavengers and ferroptosis inhibitors (ferrostatin-1 (Fer-1) and liproxstatin-1 (Lip-1), both potent lipid peroxidation inhibitors) suppressed the increase in OS-induced ferroptosis indicators and blocked the expression of ferroptosis-related proteins . Therefore, previous studies and our preliminary findings collectively confirm that OS can activate ferroptosis, and suggest that employing antioxidants or activating antioxidant pathways could mitigate OS-induced MSC ferroptosis. 3.2.1 Activating KEAP1/Nrf2/GPX4 signaling pathway As a primary defender against ferroptosis, upregulation of GPX4 levels can render MSCs resistant to ferroptosis. Hu et al. identified ferroptosis as the cause of poor MSC retention rates shortly after exposure to an OS microenvironment or following engraftment into an injured liver setting. Strategies to suppress MSC ferroptosis include pretreating MSCs with Fer-1 and Lip-1 and enhancing intracellular GPX4 transcription . These approaches have effectively increased MSC retention under ROS stress and improved the liver-protective outcomes post-implantation. 
Conversely, BMSCs influenced by neuroblastoma displayed increased sensitivity to ferroptosis, due to GPX4 downregulation . Similarly, pretreatment of BMSCs with poliumoside, which boosts intracellular GPX4 expression levels, effectively countered OS-induced BMSC ferroptosis, as indicated by reduced mitochondrial ROS and malondialdehyde (MDA) levels, and increased GPX4 protein levels . In contrast, disrupting GPX4 expression through lentiviral transfection diminished the anti-ferroptosis effects. The nuclear factor erythroid 2-related factor 2 (Nrf2), a transcription factor located in the mammalian nucleus, regulates the stress-induced activation of cytoprotective genes, including GPX4 , - . Bhat et al. mechanistically demonstrated that enhancing Nrf2 transcriptional activity could prevent lipid peroxidation and reduce suppression of GPX4 expression during ferroptosis. Furthermore, activating the Nrf2-dependent GPX4 antioxidant pathway in doxorubicin-stimulated H9c2 cardiomyocytes markedly alleviated ferroptosis and mitochondrial damage . In addition, Nrf2 nuclear translocation is negatively regulated by Kelch-like ECH-associated protein 1 (KEAP1), and the KEAP1 / Nrf2 complex can modulate intracellular OS levels , . In cardiomyocytes, reducing KEAP1 levels promoted the nuclear translocation of Nrf2 and transcription of SLC7A11 and GPX4, thus offering protection against doxorubicin-induced ferroptosis . Koppula et al. also confirmed that KEAP1transcriptionally regulates Nrf2 levels, and the KEAP1/Nrf2 pathway regulates ferroptosis via a GPX4-independent mechanism. In an erastin-induced BMSC ferroptosis in vitro model, the import of excess iron via TFR1 and minimal export by ferroportin, alongside a dysregulated KEAP1 / Nrf2 / GPX4 axis and low SLC7A11 levels, collectively exacerbated BMSC ferroptosis . Deactivating the KEAP1 / Nrf2 / GPX4 pathway reduced BMSC ferroptosis, as indicated by decreased KEAP1 protein levels and increased Nrf2 and GPX4 protein levels. The cystathionine γ-lyase (CSE) / hydrogen sulfide (H 2 S) pathway is vital for redox homeostasis, including ROS scavenging, antioxidant activation, and cellular protection against OS - . The CSE / H 2 S pathway is crucial in modulating ferroptosis - . Regulation of this pathway in MSCs has been shown to protect against ferroptosis, improving low retention and engraftment rates post-MSC delivery. Notably, enhancing the CSE / H 2 S pathway induced Keap1 S-sulfhydration, activating Nrf2 and inhibiting ferroptosis, as evidenced by reduced iron levels and ROS production, and increased GPX4 protein levels . Thus, the CSE / H 2 S pathway exerts anti-ferroptosis effects by mediating the KEAP1 / Nrf2 / GPX4 in MSCs, ultimately enhancing MSC survival post-delivery into mice . Targeting ferroptosis has proven effective in enhancing the therapeutic efficacy of endogenous MSCs for treating IVDD. Increasing GPX4 expression in NP-derived MSCs (NPSCs) inhibited NPSC ferroptosis and promoted their proliferation, revealing significant therapeutic potential for endogenous MSCs in IVDD treatment . Additionally, activating Nrf2 in ADSCs effectively alleviated stem cell dysfunction and the cell death rate in degenerative IVDs and stimulated ADSC differentiation into an NPC-like phenotype . This suggests that targeting ferroptosis could enhance ADSC transplantation therapy for IVDD and is promising for improving stem cell transplantation efficacy. 
Although direct evidence is lacking for modulating the KEAP1/Nrf2/GPX4 pathway to inhibit MSC ferroptosis in treating IVDD, existing evidence supports enhancing the KEAP1 / Nrf2 / GPX4 axis in MSCs for use in preclinical and clinical trials of MSC-based therapies for IVDD. 3.2.2 Preconditioning of MSCs with antioxidants In addition to targeting antioxidant pathways within MSCs, recent studies have utilized direct pre-treatment with antioxidants to protect MSCs from ferroptosis. Quercetin was shown to inhibit erastin-induced BMSC ferroptosis through antioxidant pathways, potentially converting into its metabolite quercetin diels-alder anti-dimer (QDAD) during this process . Both quercetin and QDAD exhibit strong antioxidant and anti-ferroptosis properties. Ebselen, an antioxidant with glutathione peroxidase-like activity, was found to prevent BMSC ferroptosis and reduce its inhibitory effect on osteogenic differentiation of BMSCs . Curcumin, known for its potent antioxidant properties, enhanced the antioxidant capacity of MSCs, as indicated by increased survival rates . Curcumin preconditioning also reduced MSC death in the hostile brain microenvironment and improved MSC therapeutic efficacy for treating intracerebral hemorrhage. Curcumin-preconditioned MSCs exhibited neuroprotective effects in an intracerebral hemorrhage model, characterized by reduced cellular injury and lower ROS levels in neuronal cells . Geraniin showed ferroptosis-inhibitory effects in BMSCs induced by erastin, involving inhibition of lipid peroxidation, iron chelation, and antioxidant actions . Picein, possessing antioxidant properties, ameliorated erastin-induced oxidative stress and enhanced the proliferation and migration of BMSCs . The protective mechanism of Picein involves activating the Nrf2 / Heme oxygenase 1 (HO-1) / GPX4 pathway. HO-1, regulated by Nrf2 and associated with antioxidant proteins, helps counter oxidative stress by facilitating Nrf2 nuclear translocation to bind the antioxidant response element of the HMOX1 gene - . Activation of the Nrf2 / HO-1 pathway increases GPX4 levels to counter OS - , highlighting the importance of the Nrf2/HO-1/GPX4 axis in countering BMSC ferroptosis. This class of antioxidants similarly exerts ferroptosis-inhibiting effects by activating intracellular antioxidant pathways. However, Yuan et al. presented a contrasting view, noting an upregulation of HO-1 in macrophages that increased intracellular Fe 2+ and promoted ferroptosis, indicating that the anti-ferroptosis effects of Picein in BMSCs require further exploration. Growing evidence has shown that antioxidant preconditioning effectively enhances MSC proliferation, migration, and yields better therapeutic outcomes for repairing degenerated intervertebral discs (IVDs). Icariin, a flavonoid with anti-ferroptotic activity, mitigates oxidative stress damage and promotes BMSC osteogenesis . Icariin treatment has been shown to improve the therapeutic efficacy of stem cell transplantation for repairing degenerated IVDs by alleviating pathological trends of IVDD and increasing collagen II and aggrecan levels in IVD tissues . Neochlorogenic acids, possessing antioxidant and anti-ferroptosis properties, potentially act through the Nrf2 / HO-1 pathway . These acids have been shown to reduce hydrogen peroxide (H 2 O 2 )-induced ROS production in BMSCs, thereby inhibiting ferroptosis. 
In vivo , BMSCs preconditioned with neochlorogenic acids exhibited enhanced protective effects for IVDD compared to other model groups, attributed to improved BMSC stability during the repair process . Similarly, co-encapsulating BMSCs with salvianolic acid B, a potent antioxidant, in 1% hyaluronic acid methacrylate hydrogel significantly reduced cell death rates compared to the BMSCs + hydrogel group . It was observed that salvianolic acid B efficiently reduced ferroptotic damage caused by H 2 O 2 to BMSCs and increased cell survival percentages in vitro . In a rat model, pretreatment of BMSCs with salvianolic acid B delayed disc degeneration progression compared to stem cell therapy alone, as evidenced by histological and immunohistochemical analyses . Thus, encapsulating BMSCs with salvianolic acid B presents potential research value for regenerative disc tissue repair. In summary, these findings collectively support the use of MSC transplantation with antioxidants as a viable approach for advancing MSC-based tissue engineering for IVDD repair, potentially through ferroptosis inhibition. This also offers a new perspective on suppressing MSC ferroptosis to increase the retention of grafted MSCs and enhance the efficiency of stem cell transplantation repair. 3.3 Inhibiting MSC ferroptosis by targeting specific molecular and pathway 3.3.1 HIF-1α Hypoxia-inducible factor 1-alpha (HIF-1α) was identified as a key ferroptosis gene by analyzing differentially expressed genes from OS-induced BMSCs compared to controls, based on the Gene Expression Omnibus databases and a ferroptosis dataset - . The pivotal role of HIF-1α in regulating ferroptosis was further validated in an animal model, suggesting that targeting HIF-1α to suppress MSC ferroptosis might be a novel approach for treating IVDD . Previous studies have shown that a hypoxic microenvironment stabilizes cellular HIF-1α and enhances ferroptosis resistance in a HIF-1α-dependent manner, likely through increased HIF-1α expression and its interaction with hypoxia-inducible factor 1 beta to form the HIF-1 complex - . Activation of the HIF-1 / hypoxic response elements (HRE) signaling pathway then promotes the binding of HRE to the solute carrier family 2, facilitated glucose transporter member 12 (SLC2A12) promoter, elevating SLC2A12 expression, modulating glutathione metabolism, and conferring resistance to ferroptosis . Transplanting ADSCs into degenerative IVDs demonstrates the therapeutic potential of MSC-based therapy for IVDD, although the physiological hypoxic state and OS contribute to low retention of transplanted ADSCs - . Hypoxia-preconditioned ADSCs have shown increased cell proliferation and migration capabilities and promoted differentiation into NPC-like cells via HIF-1α, whereas inhibiting HIF-1α produced the opposite effect . He et al. observed significant decreases in HIF-1α levels and NPSC counts in degenerative rat and human IVDs. Similarly, hypoxia-preconditioned NPSCs demonstrated enhanced resistance to cell death by activating HIF-1α, involving HMOX1 and solute carrier family 2, facilitated glucose transporter member 1 . Overexpressing HIF-1α in NPSCs also resulted in higher survival rates post-transplantation into degenerative discs in vivo . These findings collectively support the anti-ferroptosis effect of HIF-1α under the severe hypoxic and OS conditions typical of degenerative discs, suggesting HIF-1α's crucial role in enhancing the therapeutic efficacy of MSCs for IVDD treatment. 
However, other research presents a contrary view, indicating that inhibition of HIF-1α enhanced the anti-ferroptosis effects of Fer-1 in chondrocytes , and downregulation of HIF-1α reduced ferroptosis and improved gastric and minor intestinal mucosal injury in an animal model , . Therefore, the approach of targeting HIF-1α to suppress MSC ferroptosis remains controversial, and extensive foundational research on MSC-based therapy for IVDD is essential to elucidate this strategy. 3.3.2 SIRT1 NAD-dependent histone deacetylase sirtuin-1 (SIRT1) is a nicotinamide adenine dinucleotide-dependent enzyme that regulates critical metabolic proteins during oxidative stress (OS) - . The SIRT1 / Nrf2 signaling pathway provides anti-OS effects and can prevent ferroptosis, restoring redox balance. Sanz-Alcázar et al. reported that frataxin deficiency in dorsal root ganglion neurons disrupts iron homeostasis, reduces SIRT1 expression, diminishes Nrf2 activation, and impairs the cellular response to OS, ultimately leading to ferroptosis. Conversely, increased SIRT1 levels activate the liver kinase B1 homolog / 5'-AMP-activated protein kinase pathway, which in turn activates Nrf2, playing a vital role in the antioxidant response . Activation of the Sirt1/Nrf2/GPX4 pathway also protects BMSCs from erastin-induced ferroptosis and enhances cell viability, thereby improving the effectiveness of BMSC-based therapy . Interestingly, in fundamental research on MSC-based therapy for IVDD, activating SIRT1 reduced ROS accumulation, decreased expression of senescence-related proteins, and increased the proliferation ability of NPSCs . SIRT1 activation also confirmed the efficacy of NPSCs in alleviating IVDD in a puncture-induced IVDD rat model based on in vivo assessments . However, inhibiting SIRT1 partially reversed the therapeutic effects of NPSCs on degenerated IVDs - . Further research showed that overexpressing SIRT1 in MSCs effectively mitigated IVDD, as evidenced by the recovery of IVD height and volume and increased mRNA and protein levels of type II collagen and aggrecan, which inhibit the NF-kappaB p65 inflammatory pathway . Additionally, activating the SIRT1 / PPAR-γ co-activator 1α pathway, closely associated with mitochondrial function and antioxidant activities , , alleviated OS-induced mitochondrial dysfunctions, including reduced mitochondrial ROS production, increased mitochondrial membrane potentials, and enhanced survival rates of transplanted NPSCs, thus enhancing the therapeutic potential of MSCs for IVDD . Overexpression of SIRT1 could mitigate some limitations in MSC survival and adaptation under the severe conditions of disc degeneration, potentially due to SIRT1-mediated anti-ferroptosis effects in MSCs. Collectively, these findings indicate that targeting SIRT1 holds promise for treating IVDD by improving the therapeutic efficacy of MSC-based therapy. 3.3.3 PI3K/AKT pathway ROS accumulation in degenerative IVDs can trigger ferroptosis in MSCs and reduce the survival rate of engrafted MSCs. Suppressing MSC ferroptosis could potentially enhance their survival in the oxidative stress (OS) microenvironment of degenerative IVDs. Further mechanistic studies have shown that the phosphatidylinositol 3-kinase (PI3K) / threonine-protein kinase (AKT) signaling pathway can modulate ferroptosis, offering a novel approach for stem cell therapy in treating IVDD - . 
The PI3K / AKT signaling cascades are activated by hormones, growth factors, and other extracellular stimuli to regulate essential cellular functions such as cell proliferation, regulated cell death (RCD), and cell survival - . For example, activation of the PI3K / AKT pathway provides neuroprotection by phosphorylating Nrf2, which then enhances FTH1 transcription to store excess iron ions and prevent iron toxicity from excessive Fe 2+ accumulation . Furthermore, activation of the PI3K/AKT pathway has been shown to increase optic atrophy 1 expression, a member of the mitochondrial fusion protein family, promoting mitochondrial fusion and preventing mitochondrial structural and functional abnormalities, ultimately reducing ferroptosis , - . The PI3K / AKT pathway, associated with resistance mechanisms and cell survival, can be activated to desensitize cells to ferroptosis, offering an alternative approach to stem cell therapy for IVDD. In an OS microenvironment in vitro model of BMSCs, the PI3K / AKT signaling pathway was found to be involved in OS-induced BMSC ferroptosis . The upregulation of PI3K and AKT phosphorylation increased anti-oxidative gene expression and reduced intracellular ROS levels, thereby inhibiting BMSC ferroptosis and enhancing cell viability . Interestingly, activation of the PI3K / AKT signaling pathway also improved the antioxidant defense of BMSCs under the OS microenvironment and enhanced their osteogenic differentiation capacity, as demonstrated by increased mRNA transcripts of osteogenic markers . Activating the PI3K / AKT pathway has improved anti-OS processes in MSCs by suppressing ferroptosis, providing a theoretical foundation for enhancing stem cell-based therapy for treating IVDD by increasing the survival of transplanted MSCs in the OS microenvironment. NPSCs can differentiate into NPCs and exert paracrine effects that maintain the quantity and quality of IVD cells, thus enhancing stem cell-based therapy for intervertebral disc regeneration , . Activation of the PI3K / AKT pathway promotes NPSC proliferation and adaptation to the niche of degenerated IVDs, advancing the endogenous repair process - . In an in vitro oxidative stress (OS) microenvironment simulated by H2O2, activation of the PI3K / AKT pathway alleviated mitochondrial dysfunction, including changes in mitochondrial ultrastructure and mitochondrial ROS production, thereby reducing ROS levels and increasing NPSC viability . Conversely, pretreatment of NPSCs with a PI3K inhibitor diminished these protective effects and exacerbated NPSC ferroptosis. In rat models of mechanical loading stress-induced IVDD, excessive mechanical loading inactivated the PI3K / AKT pathway in human NPSCs, resulting in intracellular ROS accumulation, mitochondrial dysfunction, and decreased cell viability . This validated the role of the PI3K / AKT signaling pathway in regulating and inhibiting ferroptosis. Activation of the PI3K/AKT pathway reversed these effects and efficiently suppressed ferroptosis in NPSCs. Significantly, activation of the PI3K/AKT pathway alleviated cell death in human NPSCs in vivo models and substantially mitigated IVDD . The structure of the IVD was restored, the amount of extracellular matrix (ECM) increased, and cell numbers were augmented. Furthermore, in a rat IVDD model induced by needle puncture, activation of the PI3K/AKT pathway reduced ROS generation and maintained mitochondrial homeostasis in NPSCs, while inhibitors of this pathway attenuated these protective effects . 
The PI3K / AKT pathway alleviated excessive cell death of NPSCs induced by OS in the microenvironment of degenerative IVDs, as evidenced by X-ray, magnetic resonance imaging (MRI), and histological analyses . Thus, the PI3K / AKT signaling pathway emerges as a promising candidate for treating IVDD by increasing the survival rate of NPSCs. Therefore, suppressing MSC ferroptosis via the PI3K/AKT pathway could be adopted for IVDD treatment, establishing a specific therapeutic strategy to preserve MSC viability and potentially enhance IVD regeneration. 3.3.4 GDFs Growth differentiation factors (GDFs) modulate cellular processes such as proliferation, differentiation, and cell death - . Previous research has indicated that GDFs may mediate oxidative stress (OS) responses and be involved in ferroptosis - . For example, in a mouse model of sepsis-induced cardiomyopathy, GDF-15 provided protection to cardiomyocytes by inhibiting OS and suppressing ferroptosis, thereby reducing myocardial injury . Further research showed that GDF-15 promotes the transcription of GPX4, reducing lipid peroxidation in cardiomyocytes. In another model, GDF-15 knockout in ferroptosis induced by erastin led to decreased intracellular glutathione (GSH) levels and accelerated ferroptosis progression . This study also demonstrated that GDF-15 enhances the expression of SLC7A11, explaining its role in increasing intracellular GSH and GPX4 levels. In a sepsis model induced by cecal ligation and puncture in mice, overexpression of GDF-11 inhibited ferroptosis by upregulating SIRT1, thereby reducing lung tissue damage and inflammation, and preserving alveolar barrier integrity, thus presenting a promising molecular target for acute lung injury treatment . These findings underscore the role of GDFs in maintaining redox balance and regulating ferroptosis in BMSCs. GDF-5, part of the transforming growth factor-beta superfamily, enhances chondrogenic differentiation - . In an oxidative stress environment simulating IVDD, adding GDF-5 reduced OS-induced cell death in NPSCs and promoted their chondroid differentiation. This effect is possibly mediated by the RhoA / Rho-associated protein kinase (ROCK) signaling pathway . In models of myocardial injury and subarachnoid hemorrhage in mice and rats, modulation of the RhoA / ROCK pathway has been shown to influence cardiomyocyte and neuronal ferroptosis, respectively - . This suggests that GDF-5 may enhance MSC survival by suppressing ferroptosis through the RhoA/ROCK pathway, indicating potential for gene-targeted NPSCs. Additionally, in a rat tail IVDD model, GDF-5-preconditioned BMSCs significantly improved NP regeneration and outperformed the sole BMSC treatment . Okoro et al. also reported that GDF-5 enhanced differentiation of BMSCs into NP-like cells, supporting the NPC phenotype . A series of experiments suggests that GDF-5 not only inhibits ferroptosis in engrafted MSCs but also effectively induces their differentiation into NP-like cells both in vivo and in vitro , offering a feasible approach for NP regeneration and IVDD repair. In summary, targeting MSC ferroptosis using GDF-5 as a gene target has shown great potential and warrants further validation in additional in vivo IVDD animal models. 3.3.5 Activating Prominin-2 exerts anti-ferroptosis effects As a 100-kDa glycoprotein, Prominin-2 comprises five transmembrane domains and two glycosylated extracellular loops - . It is a member of cholesterol-binding proteins and is associated with plasma membrane protrusions . 
PROM2 (encoding the prominin 2) mRNA is expressed in various normal human tissues . Recently, Prominin-2 has been recognized as a ferroptosis inhibitor that can transport ferritin out of the cell, thereby reducing intracellular iron ion accumulation and preventing ferroptosis , - . Adamiec-Organisciok et al. confirmed that PROM2 is a ferro-resistance marker, as evidenced by increased PROM2 mRNA levels in resistant cell lines, and decreased levels in sensitive cell lines. Paris et al. discovered that overexpression of Prominin-2 is associated with ferroptosis resistance by reducing cytoplasmic Fe 2+ accumulation, thus reducing lipid peroxidation and ferroptosis. Brown et al. reported that Prominin-2 opposes ferroptotic cell death. They found that 4-hydroxynonenal, a lipid peroxidation by-product and a known ferroptosis biomarker, activates heat shock factor protein 1 (HSF1), which induces Prominin-2 expression through HSF1-dependent transcription of PROM2. Additionally, activating transcription factor 1 enhances ferroptosis resistance by upregulating N6-adenosine-methyltransferase subunit transcription, thereby stabilizing PROM2 mRNA . Increased Prominin-2 reduces erastin-induced ferroptosis and improves cell viability and proliferation rate . These findings demonstrate that Prominin-2-activated iron export contributes to ferroptosis resistance and has significant implications for enhancing MSC-based therapy for IVDD. Based on this research, we proposed that targeting the mechanism inducing Prominin-2 expression could expand the options to enhance MSC ferroptosis resistance. Activating Prominin-2 expression could be a viable approach for improving the therapeutic efficiency of MSC-based therapies for IVDD. Our preliminary work supports that ferroptosis is a major factor in the early, rapid, and extensive depletion of BMSCs under in vitro ROS stress, consistent with the aforementioned findings . Additionally, we transfected BMSCs with an overexpression lentivirus to generate Prominin-2-overexpressed BMSCs. We also confirmed that activating Prominin-2 expression significantly mitigates the loss of BMSCs during the initial stage after implantation into degenerative IVDs. Enhanced BMSC retention slows the depletion of engrafted BMSCs and effectively boosts their therapeutic efficacy in the harsh environment of degenerative IVDs. Although our previous findings are impactful for MSC-based therapy, targeted Prominin-2 activators are not yet available. In MSC-based therapy for IVDD, targeting Prominin-2-mediated ferroptosis defense offers a potential intervention. Our future work will delve into the involvement and underlying mechanisms of Prominin-2 in ferroptosis and reveal its additional roles. For instance, our preliminary work discovered that Prominin-2 not only transports iron ions beyond the cell but also physically interacts with BTB and CNC homolog 1 (BACH1) and facilitates its degradation . BACH1, a transcription factor, represses multiple antioxidant genes (including Nrf2) and thus disrupts cellular redox homeostasis - . Increasing evidence indicates that BACH1 enhances ferroptosis by inhibiting the transcription of various OS-induced protective genes - . Regarding its mechanism, Prominin-2 exerts a dual role in OS-induced ferroptosis through the iron metabolism pathway and the Prominin-2 / BACH1 antioxidant pathway (Figure ). 
Investigating the upstream and downstream nodes of this pathway could clarify the function of Prominin-2 and provide new insights for Prominin-2-activating drug discovery. Moreover, evidence has shown that hydrogels could boost MSC survival after engraftment in the adverse OS microenvironment - . In this context, we also plan to explore the potential of Prominin-2 activator-delivering hydrogels to enhance MSC resistance to OS. Activating Prominin-2 could prevent ferroptotic cell death of transplanted MSCs and improve the therapeutic effect of MSCs for IVDD. In summary, a deeper exploration of the anti-ferroptosis mechanism based on Prominin-2 is highly promising. 3.4 Hydrogel-encapsulated stem cells exert anti-ferroptosis effects Given the harsh OS microenvironment in degenerative IVDs sensitizes MSCs to ferroptosis, targeting transplanted MSC ferroptosis is essential for enhancing the clinical efficacy of cell transplantation therapy for IVDD. This includes modulating the ferroptosis signaling pathway in stem cells or enhancing the intracellular antioxidant defense system. It is important to note that injecting gene-modified or antioxidant preconditioned MSCs might cause a sudden increase in intradiscal pressure, contributing to the leakage of engrafted MSCs. Considering the challenge of injecting MSCs into degenerative IVDs, an optimal candidate carrier could be designed to address this challenge. Composite hydrogels exhibit good biocompatibility, are injectable for minimally invasive administration, and maintain stable mechanical strength after injection to prevent MSC leakage - . More importantly, hydrogel encapsulation creates a defensive shield for the transplanted MSCs and protects them from ferroptotic damage, thus reducing ferroptosis in stem cells , , . In vivo research on aged bone regeneration showed that the fabricated injectable hydrogel was highly sensitive to ROS and effectively scavenged intracellular ROS of BMSCs in the OS microenvironment . This demonstrated that composite hydrogels efficiently modulate the antioxidant function of BMSCs to defend against ferroptosis and improve the host microenvironment, thereby enhancing BMSCs' self-renewal ability and osteogenic capacity. Additionally, composite hydrogels could also be utilized for sustained drug release, including ferroptosis inhibitors and antioxidants , - . The sustained drug release from the hydrogel has a persistent and strong anti-ferroptosis effect on MSCs. In a rat-infected bone defect model, ferroptosis was identified as the primary cause of BMSC death under the infected bone microenvironment. Based on this, Yuan et al. designed a hydrogel composite scaffold with ROS-responsive and anti-ferroptosis properties, featuring long-term Fer-1 release to deliver BMSCs for repairing infected bone defects. Targeting ferroptosis mitigated OS damage to BMSCs and protected their cell viability, thus preserving osteogenic differentiation potential and facilitating osteogenic regeneration. Results from micro-CT, X-ray, and histological evaluations indicated that the hydrogel composite scaffold-loaded BMSCs targeted ferroptosis to better promote bone regeneration than the pure hydrogel group and are a promising therapeutic approach for repairing infected bone defects. As a result, composite hydrogels for MSC encapsulation provide a direct protective barrier and continuously inhibit ferroptosis through the OS pathway within the stem cells. 
Alternatively, it can achieve sustained and controlled release of ferroptosis inhibitors, thus achieving continuous suppression of ferroptosis. Increasing numbers of in vivo studies on IVDD show that composite hydrogels carrying MSCs are promising for enhancing the effectiveness of stem cell-based therapy , , , . These hydrogels have shown anti-ferroptosis effects on engrafted MSCs, thus promoting IVD regeneration. In rat models of IVDD, ADSCs or ADSC-laden hydrogels were transplanted into degenerative IVDs . Four weeks post-implantation, the retention of ADSCs in the composite hydrogels was higher, and the proportion of the NP area was also larger than in other groups. Notably, there was a significant increase in cell proliferation rate and a decrease in intracellular MDA levels in the ADSC-laden hydrogel group, indicating that hydrogel-encapsulated ADSCs effectively resisted ferroptotic damage under OS conditions. More importantly, these hydrogels showed a more substantial effect in delaying IVDD, preserving IVD tissue integrity, and boosting NP-like ECM production . In another study using a rat IVDD model, MSCs were encapsulated by alginate and gelatin microgel, and the biocompatible hydrogel loaded with MSCs was delivered into degenerative IVDs . It is noteworthy that damage to mitochondrial cristae was reduced under transmission electron microscopy in the hydrogel-encapsulated group, demonstrating that ferroptosis contributed to engrafted MSC death under OS conditions. Hydrogel encapsulation protected MSCs in the harsh disc microenvironment, prolonged MSC retention, and preserved their migration, proliferation, and differentiation properties, ultimately resulting in more effective reduction of disc degeneration compared to treatment with MSCs alone . Based on this defense against MSC ferroptosis after transplantation, Wang et al. designed a manganese oxide (MnOx) nanohydrogel to deliver BMSCs to repair IVDD . MnOx has strong antioxidant enzyme activities and can continuously eliminate ROS in degenerative IVDs and improve the OS microenvironment - . This injectable composite nanohydrogel enables MnOx to be more resistant to degradation, allowing it to maintain effective concentrations in the lesion area for extended periods. Importantly, MnOx nanohydrogel reduced ferroptotic damage to BMSCs, as evidenced by increased cell proliferation, enhanced BMSC metabolic activity, and lowered intracellular ROS levels . Due to its superior anti-ferroptosis effects, the BMSCs released by hydrogels maintain high stemness and low senescence under OS conditions, and also remodel NPCs' ECM. In vivo , injection of BMSC-loaded MnOx nanohydrogel showed the highest therapeutic efficacy, as assessed by gross evaluation, X-ray and MRI examinations, and histological staining . In conclusion, the biocompatible MSC hydrogel suppressed ferroptosis activation and improved MSC survival after intradiscal transplantation. Targeting the ferroptosis signaling pathway, combined with hydrogel encapsulation, shows great potential and could be an effective strategy to enhance the effectiveness of MSC-based therapy for treating IVDD diseases.
Ferroptosis is implicated in the OS-induced loss of transplanted MSCs under harsh degenerative conditions in intervertebral discs (IVDs), with this involvement closely regulated by intracellular iron metabolism - . Iron ions disrupt the redox balance when accumulated excessively within cells and tissues, catalyzing ROS generation and promoting ferroptosis - . In response to the initiation mechanisms of ferroptosis, researchers have developed strategies encompassing iron uptake, storage, and transport to preserve cellular iron homeostasis and prevent MSC ferroptosis. In an OS injury in vitro model, Jing et al. demonstrated significant activation of the ferroptosis pathway through transcriptomic analysis and phenotypic experiments. Notably, while NCOA4 expression levels increased and ferritin levels decreased, their binding rate was found to be elevated, suggesting that NCOA4-mediated ferritin degradation contributes to iron accumulation and promotes ferroptosis. In a rat model of smoking-related osteoporosis, inhibitors of ferritinophagy or ferroptosis were shown to suppress BMSC ferroptosis, enhance cell viability, and thereby reduce BMSC dysfunction and reverse bone damage . Similarly, in dental pulp stem cells, NCOA4-mediated ferritinophagy has been linked to ferroptosis, indicated by cytosolic iron overload, elevated ROS levels, and increased NCOA4 expression . Knocking down NCOA4 was observed to reduce ferritin degradation. Additionally, another study targeting the iron storage pathway showed potential in inhibiting MSC ferroptosis and enhancing MSC survival rates. In an in vitro model of BMSC ferroptosis induced by erastin, overexpression of FTH1/FTL was found to reduce ROS levels and abrogate lipid peroxidation, markedly improving BMSC survival . This finding indicates that targeting the FTH1 / FTL pathway could enhance resistance to ferroptosis and improve engrafted BMSC survival. Moreover, other studies have employed pre-treatment of stem cells with deferoxamine (DFO). DFO, known as a ferroptosis inhibitor, chelates cytoplasmic iron ions, preventing their interaction with peroxides and thus inhibiting the formation of harmful radicals . Hopfner et al. found that DFO-preconditioned ADSCs significantly enhanced regenerative capabilities for wound healing in diabetes while reducing intracellular ROS levels, offering a new approach to improve diabetic ADSC functionality. Khoshlahni et al. also reported that DFO-preconditioned BMSCs resisted OS damage in stressful environments. Through iron depletion by DFO, BMSCs showed improved survival rates under OS conditions, facilitating subsequent BMSC-based transplantation therapies. These findings underscore the importance of modulating the iron metabolism pathway in maintaining iron homeostasis and inhibiting MSC ferroptosis. We were also acutely aware of the critical role of targeting MSC ferroptosis in clinical MSC treatments. Therefore, we hypothesized that regulating iron metabolism in MSCs influences the efficiency of stem cell transplantation in repairing degenerated IVDs. Free ferrous iron (Fe 2+ ) acts as a potent oxidative factor, generating unstable radicals via the Fenton reaction , . Coenzyme Q10 (Co-Q10) has been reported to reduce the perferryl radical, chelate iron, and thus prevent iron overload - . Lazourgui et al. confirmed that Co-Q10 helps regulate iron levels, improving cellular antioxidant status and alleviating oxidative stress (OS) during insulin resistance. Peng et al. 
observed that intracellular iron ion levels, when overloaded, corresponded with a continuous decrease in Co-Q10, exacerbating cellular OS and further suggesting Co-Q10's efficacy as an iron-chelating agent. In a rat needle-puncture IVDD model, Sun et al. administered Co-Q10-preconditioned BMSCs into degenerative IVDs, noting improved viability, restored mitochondrial structure and function, and increased production of ECM components. Thus, Co-Q10 protects against OS-induced BMSC ferroptosis, supports cell survival and differentiation, enhances the efficacy of BMSC transplantation therapy, and may represent a viable treatment option for IVDD. Similarly, adding ferritin, which is crucial for iron homeostasis and detoxification, has shown cytoprotective effects on BMSCs and promoted their differentiation into osteoblasts, essential for bone tissue health - . The protective mechanism likely involves enhancing the iron ion storage pathway, effectively countering OS-induced BMSC ferroptosis. However, direct in vivo evidence is currently lacking regarding the regulation of intracellular iron levels in stem cells and its impact on the efficiency of repairing degenerated IVDs. Future studies in both cellular and rat IVDD models should aim to demonstrate that interventions in the iron metabolism pathway can desensitize transplanted MSCs to ferroptosis, further validating the regulation of ferroptosis by imbalanced iron homeostasis. Targeting iron-chelating drugs, iron metabolism, or redox-related proteins may offer a new strategy to enhance the efficiency of MSC-based therapy in IVDD.
It is widely acknowledged that OS is a primary pathogenic factor in the numerous pathophysiological processes of IVDD , - . Studies have shown that OS accelerates IVDD, and the OS microenvironment within degenerated discs jeopardizes the survival of transplanted MSCs , , . OS is characterized by an imbalance between ROS production and scavenging , . Iron overload, overwhelming the antioxidant defense system, initiates the classic ferroptosis pathway, closely linking ferroptosis to OS , - . Park et al. reported that excessive ROS production activates OS-induced lipid peroxidation, thereby exacerbating ferroptosis-dependent cytotoxicity in human astrocytes. Sánchez-Ortega et al. observed that excessive ROS production also induces ferroptosis in lung squamous cell carcinoma (LUSC), suggesting that targeting OS-induced LUSC ferroptosis could present a new therapeutic approach. Increasing evidence indicates that molecular pathways activated by OS intersect with those of ferroptosis in transplanted MSCs, sharing several molecular targets such as accumulation of lipid peroxidation products and low levels of GSH and GPX4 , - . Persistent and excessive OS at the transplantation site triggers ferroptosis in MSCs due to elevated ROS levels, significantly reducing MSC retention after transplantation. Accordingly, targeting ferroptosis could markedly improve the effectiveness of stem cell transplantation therapy by reducing intracellular OS levels or enhancing antioxidant defenses. For example, in an erastin-treated BMSC model, the antioxidant quercetin demonstrated effective anti-ferroptosis properties through its antioxidative functions, protecting BMSCs from erastin-induced ferroptosis . Tert-butyl hydroperoxide (TBHP) has been shown to induce cellular oxidative injury, and we used TBHP to simulate OS conditions in an in vitro model , - . TBHP treatment significantly increased ferroptosis indicators (Fe2+ accumulation, increased ROS levels, excessive lipid peroxidation) and altered the levels of ferroptosis-related proteins (acyl-CoA synthetase long-chain family member 4 (ACSL4), prostaglandin G/H synthase 2 (PTGS2), GPX4, and ferritin) . Conversely, ROS scavengers and ferroptosis inhibitors (ferrostatin-1 (Fer-1) and liproxstatin-1 (Lip-1), both potent lipid peroxidation inhibitors) suppressed the increase in these OS-induced ferroptosis indicators and prevented the changes in ferroptosis-related protein expression .
3.2.1 Activating KEAP1/Nrf2/GPX4 signaling pathway
GPX4 is a primary defender against ferroptosis, and upregulating GPX4 levels can render MSCs resistant to ferroptosis. Hu et al. identified ferroptosis as the cause of poor MSC retention rates shortly after exposure to an OS microenvironment or following engraftment into an injured liver. Strategies to suppress MSC ferroptosis include pretreating MSCs with Fer-1 and Lip-1 and enhancing intracellular GPX4 transcription . These approaches have effectively increased MSC retention under ROS stress and improved the liver-protective outcomes post-implantation. Conversely, BMSCs influenced by neuroblastoma displayed increased sensitivity to ferroptosis due to GPX4 downregulation .
Similarly, pretreatment of BMSCs with poliumoside, which boosts intracellular GPX4 expression levels, effectively countered OS-induced BMSC ferroptosis, as indicated by reduced mitochondrial ROS and malondialdehyde (MDA) levels and increased GPX4 protein levels . In contrast, disrupting GPX4 expression through lentiviral transfection diminished the anti-ferroptosis effects. Nuclear factor erythroid 2-related factor 2 (Nrf2), a transcription factor that translocates to the nucleus upon activation, regulates the stress-induced activation of cytoprotective genes, including GPX4 , - . Bhat et al. mechanistically demonstrated that enhancing Nrf2 transcriptional activity could prevent lipid peroxidation and reduce the suppression of GPX4 expression during ferroptosis. Furthermore, activating the Nrf2-dependent GPX4 antioxidant pathway in doxorubicin-stimulated H9c2 cardiomyocytes markedly alleviated ferroptosis and mitochondrial damage . In addition, Nrf2 nuclear translocation is negatively regulated by Kelch-like ECH-associated protein 1 (KEAP1), and the KEAP1/Nrf2 complex can modulate intracellular OS levels , . In cardiomyocytes, reducing KEAP1 levels promoted the nuclear translocation of Nrf2 and transcription of SLC7A11 and GPX4, thus offering protection against doxorubicin-induced ferroptosis . Koppula et al. also confirmed that KEAP1 regulates Nrf2 levels and that the KEAP1/Nrf2 pathway can regulate ferroptosis via a GPX4-independent mechanism. In an in vitro model of erastin-induced BMSC ferroptosis, the import of excess iron via TFR1 and minimal export by ferroportin, alongside a dysregulated KEAP1/Nrf2/GPX4 axis and low SLC7A11 levels, collectively exacerbated BMSC ferroptosis . Conversely, restoring this axis reduced BMSC ferroptosis, as indicated by decreased KEAP1 protein levels and increased Nrf2 and GPX4 protein levels. The cystathionine γ-lyase (CSE) / hydrogen sulfide (H2S) pathway is vital for redox homeostasis, including ROS scavenging, antioxidant activation, and cellular protection against OS - . The CSE/H2S pathway is also crucial in modulating ferroptosis - . Regulation of this pathway in MSCs has been shown to protect against ferroptosis, improving the otherwise low retention and engraftment rates after MSC delivery. Notably, enhancing the CSE/H2S pathway induced KEAP1 S-sulfhydration, activating Nrf2 and inhibiting ferroptosis, as evidenced by reduced iron levels and ROS production and increased GPX4 protein levels . Thus, the CSE/H2S pathway exerts anti-ferroptosis effects by mediating the KEAP1/Nrf2/GPX4 axis in MSCs, ultimately enhancing MSC survival post-delivery into mice . Targeting ferroptosis has also proven effective in enhancing the therapeutic efficacy of endogenous MSCs for treating IVDD. Increasing GPX4 expression in NP-derived MSCs (NPSCs) inhibited NPSC ferroptosis and promoted their proliferation, revealing significant therapeutic potential for endogenous MSCs in IVDD treatment . Additionally, activating Nrf2 in ADSCs effectively alleviated stem cell dysfunction and the cell death rate in degenerative IVDs and stimulated ADSC differentiation into an NPC-like phenotype . This suggests that targeting ferroptosis could enhance ADSC transplantation therapy for IVDD and is promising for improving stem cell transplantation efficacy.
Although direct evidence is lacking for modulating the KEAP1/Nrf2/GPX4 pathway to inhibit MSC ferroptosis in treating IVDD, existing evidence supports enhancing the KEAP1/Nrf2/GPX4 axis in MSCs for use in preclinical and clinical trials of MSC-based therapies for IVDD.
3.2.2 Preconditioning of MSCs with antioxidants
In addition to targeting antioxidant pathways within MSCs, recent studies have utilized direct pre-treatment with antioxidants to protect MSCs from ferroptosis. Quercetin was shown to inhibit erastin-induced BMSC ferroptosis through antioxidant pathways, potentially converting into its metabolite quercetin Diels-Alder anti-dimer (QDAD) during this process . Both quercetin and QDAD exhibit strong antioxidant and anti-ferroptosis properties. Ebselen, an antioxidant with glutathione peroxidase-like activity, was found to prevent BMSC ferroptosis and to relieve the inhibitory effect of ferroptosis on osteogenic differentiation of BMSCs . Curcumin, known for its potent antioxidant properties, enhanced the antioxidant capacity of MSCs, as indicated by increased survival rates . Curcumin preconditioning also reduced MSC death in the hostile brain microenvironment and improved MSC therapeutic efficacy for treating intracerebral hemorrhage. Curcumin-preconditioned MSCs exhibited neuroprotective effects in an intracerebral hemorrhage model, characterized by reduced cellular injury and lower ROS levels in neuronal cells . Geraniin showed ferroptosis-inhibitory effects in erastin-treated BMSCs, involving inhibition of lipid peroxidation, iron chelation, and antioxidant actions . Picein, which possesses antioxidant properties, ameliorated erastin-induced oxidative stress and enhanced the proliferation and migration of BMSCs . The protective mechanism of picein involves activating the Nrf2 / heme oxygenase 1 (HO-1) / GPX4 pathway. HO-1, an antioxidant protein encoded by the HMOX1 gene, is induced when Nrf2 translocates to the nucleus and binds the antioxidant response element of HMOX1, thereby helping to counter oxidative stress - . Activation of the Nrf2/HO-1 pathway increases GPX4 levels to counter OS - , highlighting the importance of the Nrf2/HO-1/GPX4 axis in countering BMSC ferroptosis. This class of antioxidants similarly exerts ferroptosis-inhibiting effects by activating intracellular antioxidant pathways. However, Yuan et al. presented a contrasting view, noting an upregulation of HO-1 in macrophages that increased intracellular Fe2+ and promoted ferroptosis, indicating that the anti-ferroptosis effects of picein in BMSCs require further exploration. Growing evidence has shown that antioxidant preconditioning effectively enhances MSC proliferation and migration and yields better therapeutic outcomes for repairing degenerated IVDs. Icariin, a flavonoid with anti-ferroptotic activity, mitigates oxidative stress damage and promotes BMSC osteogenesis . Icariin treatment has been shown to improve the therapeutic efficacy of stem cell transplantation for repairing degenerated IVDs by alleviating pathological trends of IVDD and increasing collagen II and aggrecan levels in IVD tissues . Neochlorogenic acid, which possesses antioxidant and anti-ferroptosis properties, potentially acts through the Nrf2/HO-1 pathway . It has been shown to reduce hydrogen peroxide (H2O2)-induced ROS production in BMSCs, thereby inhibiting ferroptosis.
In vivo, BMSCs preconditioned with neochlorogenic acid exhibited enhanced protective effects against IVDD compared with other model groups, attributed to improved BMSC stability during the repair process . Similarly, co-encapsulating BMSCs with salvianolic acid B, a potent antioxidant, in 1% hyaluronic acid methacrylate hydrogel significantly reduced cell death rates compared with the BMSCs + hydrogel group . Salvianolic acid B efficiently reduced H2O2-induced ferroptotic damage to BMSCs and increased cell survival percentages in vitro . In a rat model, pretreatment of BMSCs with salvianolic acid B delayed the progression of disc degeneration compared with stem cell therapy alone, as evidenced by histological and immunohistochemical analyses . Thus, encapsulating BMSCs with salvianolic acid B holds promise for regenerative disc tissue repair. In summary, these findings collectively support MSC transplantation combined with antioxidants as a viable approach for advancing MSC-based tissue engineering for IVDD repair, potentially through ferroptosis inhibition. This also offers a new perspective on suppressing MSC ferroptosis to increase the retention of grafted MSCs and enhance the efficiency of stem cell transplantation repair.
3.3.1 HIF-1α
Hypoxia-inducible factor 1-alpha (HIF-1α) was identified as a key ferroptosis gene by analyzing differentially expressed genes from OS-induced BMSCs compared to controls, based on the Gene Expression Omnibus database and a ferroptosis gene dataset - . The pivotal role of HIF-1α in regulating ferroptosis was further validated in an animal model, suggesting that targeting HIF-1α to suppress MSC ferroptosis might be a novel approach for treating IVDD . Previous studies have shown that a hypoxic microenvironment stabilizes cellular HIF-1α and enhances ferroptosis resistance in a HIF-1α-dependent manner, likely through increased HIF-1α expression and its interaction with hypoxia-inducible factor 1 beta to form the HIF-1 complex - . Activation of HIF-1 signaling then promotes binding of HIF-1 to hypoxia response elements (HREs) in the promoter of solute carrier family 2, facilitated glucose transporter member 12 (SLC2A12), elevating SLC2A12 expression, modulating glutathione metabolism, and conferring resistance to ferroptosis . Transplanting ADSCs into degenerative IVDs demonstrates the therapeutic potential of MSC-based therapy for IVDD, although the physiological hypoxic state and OS contribute to low retention of transplanted ADSCs - . Hypoxia-preconditioned ADSCs have shown increased cell proliferation and migration capabilities and promoted differentiation into NPC-like cells via HIF-1α, whereas inhibiting HIF-1α produced the opposite effect . He et al. observed significant decreases in HIF-1α levels and NPSC counts in degenerative rat and human IVDs. Similarly, hypoxia-preconditioned NPSCs demonstrated enhanced resistance to cell death by activating HIF-1α, involving HMOX1 and solute carrier family 2, facilitated glucose transporter member 1 (SLC2A1) . Overexpressing HIF-1α in NPSCs also resulted in higher survival rates post-transplantation into degenerative discs in vivo . These findings collectively support the anti-ferroptosis effect of HIF-1α under the severe hypoxic and OS conditions typical of degenerative discs, suggesting HIF-1α's crucial role in enhancing the therapeutic efficacy of MSCs for IVDD treatment. However, other research presents a contrary view, indicating that inhibition of HIF-1α enhanced the anti-ferroptosis effects of Fer-1 in chondrocytes , and that downregulation of HIF-1α reduced ferroptosis and alleviated gastric and small intestinal mucosal injury in an animal model , . Therefore, the approach of targeting HIF-1α to suppress MSC ferroptosis remains controversial, and extensive foundational research on MSC-based therapy for IVDD is essential to elucidate this strategy.
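As a purely illustrative aside, the gene-screening step described at the beginning of this subsection (intersecting differentially expressed genes from OS-induced BMSCs with a ferroptosis gene set) can be sketched in a few lines of analysis code. The file names, column labels, and thresholds below are hypothetical assumptions for illustration and do not reproduce the cited study's pipeline.

```python
# Minimal sketch (hypothetical inputs): nominate ferroptosis-related candidate genes,
# such as HIF1A, by intersecting a differential-expression table with a ferroptosis gene list.
import pandas as pd

degs = pd.read_csv("bmsc_os_vs_control_degs.csv")             # assumed columns: gene, log2FC, padj
ferro = set(pd.read_csv("ferroptosis_gene_set.csv")["gene"])  # e.g., a FerrDb-style gene list

# Keep significantly changed genes and intersect with the ferroptosis gene set.
significant = degs[(degs["padj"] < 0.05) & (degs["log2FC"].abs() > 1)]
candidates = significant[significant["gene"].isin(ferro)]

# Rank candidates by adjusted p-value; HIF1A would surface here if strongly regulated.
print(candidates.sort_values("padj").head(10))
```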
3.3.2 SIRT1
Sirtuin-1 (SIRT1) is a nicotinamide adenine dinucleotide (NAD)-dependent histone deacetylase that regulates critical metabolic proteins during OS - . The SIRT1/Nrf2 signaling pathway provides anti-OS effects and can prevent ferroptosis, restoring redox balance. Sanz-Alcázar et al. reported that frataxin deficiency in dorsal root ganglion neurons disrupts iron homeostasis, reduces SIRT1 expression, diminishes Nrf2 activation, and impairs the cellular response to OS, ultimately leading to ferroptosis. Conversely, increased SIRT1 levels activate the liver kinase B1 (LKB1) / AMP-activated protein kinase (AMPK) pathway, which in turn activates Nrf2, playing a vital role in the antioxidant response . Activation of the SIRT1/Nrf2/GPX4 pathway also protects BMSCs from erastin-induced ferroptosis and enhances cell viability, thereby improving the effectiveness of BMSC-based therapy . Interestingly, in fundamental research on MSC-based therapy for IVDD, activating SIRT1 reduced ROS accumulation, decreased expression of senescence-related proteins, and increased the proliferation ability of NPSCs . SIRT1 activation also improved the efficacy of NPSCs in alleviating IVDD in a puncture-induced rat model, based on in vivo assessments . However, inhibiting SIRT1 partially reversed the therapeutic effects of NPSCs on degenerated IVDs - . Further research showed that overexpressing SIRT1 in MSCs effectively mitigated IVDD, as evidenced by the recovery of IVD height and volume and increased mRNA and protein levels of type II collagen and aggrecan, effects associated with inhibition of the NF-kappaB p65 inflammatory pathway . Additionally, activating the SIRT1 / PPAR-γ co-activator 1α pathway, which is closely associated with mitochondrial function and antioxidant activities , , alleviated OS-induced mitochondrial dysfunction, as reflected by reduced mitochondrial ROS production, increased mitochondrial membrane potential, and enhanced survival rates of transplanted NPSCs, thus enhancing the therapeutic potential of MSCs for IVDD . Overexpression of SIRT1 could mitigate some limitations in MSC survival and adaptation under the severe conditions of disc degeneration, potentially due to SIRT1-mediated anti-ferroptosis effects in MSCs. Collectively, these findings indicate that targeting SIRT1 holds promise for treating IVDD by improving the therapeutic efficacy of MSC-based therapy.
3.3.3 PI3K/AKT pathway
ROS accumulation in degenerative IVDs can trigger ferroptosis in MSCs and reduce the survival rate of engrafted MSCs. Suppressing MSC ferroptosis could therefore enhance their survival in the OS microenvironment of degenerative IVDs. Further mechanistic studies have shown that the phosphatidylinositol 3-kinase (PI3K) / protein kinase B (AKT) signaling pathway can modulate ferroptosis, offering a novel approach for stem cell therapy in treating IVDD - . The PI3K/AKT signaling cascades are activated by hormones, growth factors, and other extracellular stimuli to regulate essential cellular functions such as cell proliferation, regulated cell death (RCD), and cell survival - . For example, activation of the PI3K/AKT pathway provides neuroprotection by phosphorylating Nrf2, which then enhances FTH1 transcription to store excess iron ions and prevent iron toxicity from excessive Fe2+ accumulation . Furthermore, activation of the PI3K/AKT pathway has been shown to increase expression of optic atrophy 1 (OPA1), a member of the mitochondrial fusion protein family, promoting mitochondrial fusion and preventing mitochondrial structural and functional abnormalities, ultimately reducing ferroptosis , - . The PI3K/AKT pathway, associated with resistance mechanisms and cell survival, can be activated to desensitize cells to ferroptosis, offering an alternative approach to stem cell therapy for IVDD. In an in vitro BMSC model of the OS microenvironment, the PI3K/AKT signaling pathway was found to be involved in OS-induced BMSC ferroptosis . Upregulation of PI3K and AKT phosphorylation increased antioxidative gene expression and reduced intracellular ROS levels, thereby inhibiting BMSC ferroptosis and enhancing cell viability .
Interestingly, activation of the PI3K/AKT signaling pathway also improved the antioxidant defense of BMSCs under the OS microenvironment and enhanced their osteogenic differentiation capacity, as demonstrated by increased mRNA levels of osteogenic markers . Activating the PI3K/AKT pathway thus improves the anti-OS capacity of MSCs by suppressing ferroptosis, providing a theoretical foundation for enhancing stem cell-based therapy for IVDD by increasing the survival of transplanted MSCs in the OS microenvironment. NPSCs can differentiate into NPCs and exert paracrine effects that maintain the quantity and quality of IVD cells, thus enhancing stem cell-based therapy for intervertebral disc regeneration , . Activation of the PI3K/AKT pathway promotes NPSC proliferation and adaptation to the niche of degenerated IVDs, advancing the endogenous repair process - . In an in vitro OS microenvironment simulated by H2O2, activation of the PI3K/AKT pathway alleviated mitochondrial dysfunction, including changes in mitochondrial ultrastructure and mitochondrial ROS production, thereby reducing ROS levels and increasing NPSC viability . Conversely, pretreatment of NPSCs with a PI3K inhibitor diminished these protective effects and exacerbated NPSC ferroptosis. In rat models of mechanical loading-induced IVDD, excessive mechanical loading inactivated the PI3K/AKT pathway in human NPSCs, resulting in intracellular ROS accumulation, mitochondrial dysfunction, and decreased cell viability . This validated the role of the PI3K/AKT signaling pathway in regulating and inhibiting ferroptosis. Activation of the PI3K/AKT pathway reversed these effects and efficiently suppressed ferroptosis in NPSCs. Significantly, activation of the PI3K/AKT pathway alleviated cell death of human NPSCs in in vivo models and substantially mitigated IVDD . The structure of the IVD was restored, the amount of extracellular matrix (ECM) increased, and cell numbers were augmented. Furthermore, in a rat IVDD model induced by needle puncture, activation of the PI3K/AKT pathway reduced ROS generation and maintained mitochondrial homeostasis in NPSCs, while inhibitors of this pathway attenuated these protective effects . The PI3K/AKT pathway alleviated excessive OS-induced NPSC death in the microenvironment of degenerative IVDs, as evidenced by X-ray, magnetic resonance imaging (MRI), and histological analyses . Thus, the PI3K/AKT signaling pathway emerges as a promising candidate for treating IVDD by increasing the survival rate of NPSCs. Therefore, suppressing MSC ferroptosis via the PI3K/AKT pathway could be adopted for IVDD treatment, establishing a specific therapeutic strategy to preserve MSC viability and potentially enhance IVD regeneration.
3.3.4 GDFs
Growth differentiation factors (GDFs) modulate cellular processes such as proliferation, differentiation, and cell death - . Previous research has indicated that GDFs may mediate OS responses and be involved in ferroptosis - . For example, in a mouse model of sepsis-induced cardiomyopathy, GDF-15 protected cardiomyocytes by inhibiting OS and suppressing ferroptosis, thereby reducing myocardial injury . Further research showed that GDF-15 promotes the transcription of GPX4, reducing lipid peroxidation in cardiomyocytes. In another model of erastin-induced ferroptosis, GDF-15 knockout led to decreased intracellular glutathione (GSH) levels and accelerated ferroptosis progression .
This study also demonstrated that GDF-15 enhances the expression of SLC7A11, explaining its role in increasing intracellular GSH and GPX4 levels. In a sepsis model induced by cecal ligation and puncture in mice, overexpression of GDF-11 inhibited ferroptosis by upregulating SIRT1, thereby reducing lung tissue damage and inflammation and preserving alveolar barrier integrity, thus presenting a promising molecular target for acute lung injury treatment . These findings underscore the role of GDFs in maintaining redox balance and regulating ferroptosis, suggesting that they may similarly regulate ferroptosis in BMSCs. GDF-5, part of the transforming growth factor-beta superfamily, enhances chondrogenic differentiation - . In an oxidative stress environment simulating IVDD, adding GDF-5 reduced OS-induced cell death in NPSCs and promoted their chondrogenic differentiation. This effect is possibly mediated by the RhoA / Rho-associated protein kinase (ROCK) signaling pathway . In models of myocardial injury and subarachnoid hemorrhage in mice and rats, modulation of the RhoA/ROCK pathway has been shown to influence cardiomyocyte and neuronal ferroptosis, respectively - . This suggests that GDF-5 may enhance MSC survival by suppressing ferroptosis through the RhoA/ROCK pathway, indicating its potential as a gene target in NPSCs. Additionally, in a rat tail IVDD model, GDF-5-preconditioned BMSCs significantly improved NP regeneration and outperformed BMSC treatment alone . Okoro et al. also reported that GDF-5 enhanced differentiation of BMSCs into NP-like cells, supporting the NPC phenotype . Together, these experiments suggest that GDF-5 not only inhibits ferroptosis in engrafted MSCs but also effectively induces their differentiation into NP-like cells both in vivo and in vitro , offering a feasible approach for NP regeneration and IVDD repair. In summary, targeting MSC ferroptosis using GDF-5 as a gene target has shown great potential and warrants further validation in additional in vivo IVDD animal models.
3.3.5 Activating Prominin-2 exerts anti-ferroptosis effects
As a 100-kDa glycoprotein, Prominin-2 comprises five transmembrane domains and two glycosylated extracellular loops - . It is a cholesterol-binding protein associated with plasma membrane protrusions . PROM2 mRNA, which encodes Prominin-2, is expressed in various normal human tissues . Recently, Prominin-2 has been recognized as a ferroptosis inhibitor that can transport ferritin out of the cell, thereby reducing intracellular iron accumulation and preventing ferroptosis , - . Adamiec-Organisciok et al. confirmed that PROM2 is a marker of ferroptosis resistance, as evidenced by increased PROM2 mRNA levels in resistant cell lines and decreased levels in sensitive cell lines. Paris et al. discovered that overexpression of Prominin-2 is associated with ferroptosis resistance by reducing cytoplasmic Fe2+ accumulation, thus reducing lipid peroxidation and ferroptosis. Brown et al. reported that Prominin-2 opposes ferroptotic cell death. They found that 4-hydroxynonenal, a lipid peroxidation by-product and a known ferroptosis biomarker, activates heat shock factor protein 1 (HSF1), which induces Prominin-2 expression through HSF1-dependent transcription of PROM2. Additionally, activating transcription factor 1 enhances ferroptosis resistance by upregulating transcription of an N6-adenosine-methyltransferase subunit, thereby stabilizing PROM2 mRNA . Increased Prominin-2 reduces erastin-induced ferroptosis and improves cell viability and proliferation rate .
These findings demonstrate that Prominin-2-mediated iron export contributes to ferroptosis resistance and has significant implications for enhancing MSC-based therapy for IVDD. Based on this research, we propose that targeting the mechanisms inducing Prominin-2 expression could expand the options for enhancing MSC ferroptosis resistance. Activating Prominin-2 expression could be a viable approach for improving the therapeutic efficiency of MSC-based therapies for IVDD. Our preliminary work supports the view that ferroptosis is a major factor in the early, rapid, and extensive depletion of BMSCs under in vitro ROS stress, consistent with the aforementioned findings . Additionally, we transfected BMSCs with an overexpression lentivirus to generate Prominin-2-overexpressing BMSCs. We also confirmed that activating Prominin-2 expression significantly mitigates the loss of BMSCs during the initial stage after implantation into degenerative IVDs. Enhanced BMSC retention slows the depletion of engrafted BMSCs and effectively boosts their therapeutic efficacy in the harsh environment of degenerative IVDs. Although our previous findings are encouraging for MSC-based therapy, targeted Prominin-2 activators are not yet available. In MSC-based therapy for IVDD, targeting Prominin-2-mediated ferroptosis defense offers a potential intervention. Our future work will delve into the involvement and underlying mechanisms of Prominin-2 in ferroptosis and reveal its additional roles. For instance, our preliminary work discovered that Prominin-2 not only transports iron out of the cell but also physically interacts with BTB and CNC homolog 1 (BACH1) and facilitates its degradation . BACH1, a transcription factor, represses multiple antioxidant genes (including Nrf2) and thus disrupts cellular redox homeostasis - . Increasing evidence indicates that BACH1 enhances ferroptosis by inhibiting the transcription of various OS-induced protective genes - . Mechanistically, Prominin-2 therefore exerts a dual role in OS-induced ferroptosis through the iron metabolism pathway and the Prominin-2 / BACH1 antioxidant pathway (Figure ). Investigating the upstream and downstream nodes of this pathway could clarify the function of Prominin-2 and provide new insights for Prominin-2-activating drug discovery. Moreover, evidence has shown that hydrogels can boost MSC survival after engraftment in the adverse OS microenvironment - . In this context, we also plan to explore the potential of Prominin-2 activator-delivering hydrogels to enhance MSC resistance to OS. Activating Prominin-2 could prevent ferroptotic cell death of transplanted MSCs and improve the therapeutic effect of MSCs for IVDD. In summary, deeper exploration of the Prominin-2-based anti-ferroptosis mechanism is highly promising.
Given that the harsh OS microenvironment in degenerative IVDs sensitizes MSCs to ferroptosis, targeting transplanted MSC ferroptosis is essential for enhancing the clinical efficacy of cell transplantation therapy for IVDD. This includes modulating the ferroptosis signaling pathway in stem cells or enhancing the intracellular antioxidant defense system. It is important to note that injecting gene-modified or antioxidant-preconditioned MSCs might cause a sudden increase in intradiscal pressure, contributing to the leakage of engrafted MSCs. Considering the difficulty of injecting MSCs into degenerative IVDs, an optimal candidate carrier could be designed to address this challenge. Composite hydrogels exhibit good biocompatibility, are injectable for minimally invasive administration, and maintain stable mechanical strength after injection to prevent MSC leakage - . More importantly, hydrogel encapsulation creates a defensive shield for the transplanted MSCs and protects them from ferroptotic damage, thus reducing ferroptosis in stem cells , , . In vivo research on aged bone regeneration showed that a fabricated injectable hydrogel was highly sensitive to ROS and effectively scavenged intracellular ROS of BMSCs in the OS microenvironment . This demonstrated that composite hydrogels efficiently modulate the antioxidant function of BMSCs to defend against ferroptosis and improve the host microenvironment, thereby enhancing BMSCs' self-renewal ability and osteogenic capacity. Additionally, composite hydrogels can also be utilized for sustained drug release, including ferroptosis inhibitors and antioxidants , - . Sustained drug release from the hydrogel exerts a persistent and strong anti-ferroptosis effect on MSCs. In a rat model of infected bone defects, ferroptosis was identified as the primary cause of BMSC death in the infected bone microenvironment. Based on this, Yuan et al. designed a hydrogel composite scaffold with ROS-responsive and anti-ferroptosis properties, featuring long-term Fer-1 release, to deliver BMSCs for repairing infected bone defects. Targeting ferroptosis mitigated OS damage to BMSCs and protected their cell viability, thus preserving osteogenic differentiation potential and facilitating osteogenic regeneration. Results from micro-CT, X-ray, and histological evaluations indicated that BMSCs loaded in the ferroptosis-targeting hydrogel composite scaffold promoted bone regeneration better than the pure hydrogel group, representing a promising therapeutic approach for repairing infected bone defects. As a result, composite hydrogels for MSC encapsulation provide a direct protective barrier and continuously inhibit ferroptosis through the OS pathway within the stem cells. Alternatively, they can provide sustained and controlled release of ferroptosis inhibitors, achieving continuous suppression of ferroptosis. Increasing numbers of in vivo studies on IVDD show that composite hydrogels carrying MSCs are promising for enhancing the effectiveness of stem cell-based therapy , , , . These hydrogels have shown anti-ferroptosis effects on engrafted MSCs, thus promoting IVD regeneration. In rat models of IVDD, ADSCs or ADSC-laden hydrogels were transplanted into degenerative IVDs . Four weeks post-implantation, the retention of ADSCs delivered in the composite hydrogels was higher, and the proportion of the NP area was larger, than in the other groups.
Notably, there was a significant increase in cell proliferation rate and a decrease in intracellular MDA levels in the ADSC-laden hydrogel group, indicating that hydrogel-encapsulated ADSCs effectively resisted ferroptotic damage under OS conditions. More importantly, these hydrogels showed a more substantial effect in delaying IVDD, preserving IVD tissue integrity, and boosting NP-like ECM production . In another study using a rat IVDD model, MSCs were encapsulated by alginate and gelatin microgel, and the biocompatible hydrogel loaded with MSCs was delivered into degenerative IVDs . It is noteworthy that damage to mitochondrial cristae was reduced under transmission electron microscopy in the hydrogel-encapsulated group, demonstrating that ferroptosis contributed to engrafted MSC death under OS conditions. Hydrogel encapsulation protected MSCs in the harsh disc microenvironment, prolonged MSC retention, and preserved their migration, proliferation, and differentiation properties, ultimately resulting in more effective reduction of disc degeneration compared to treatment with MSCs alone . Based on this defense against MSC ferroptosis after transplantation, Wang et al. designed a manganese oxide (MnOx) nanohydrogel to deliver BMSCs to repair IVDD . MnOx has strong antioxidant enzyme activities and can continuously eliminate ROS in degenerative IVDs and improve the OS microenvironment - . This injectable composite nanohydrogel enables MnOx to be more resistant to degradation, allowing it to maintain effective concentrations in the lesion area for extended periods. Importantly, MnOx nanohydrogel reduced ferroptotic damage to BMSCs, as evidenced by increased cell proliferation, enhanced BMSC metabolic activity, and lowered intracellular ROS levels . Due to its superior anti-ferroptosis effects, the BMSCs released by hydrogels maintain high stemness and low senescence under OS conditions, and also remodel NPCs' ECM. In vivo , injection of BMSC-loaded MnOx nanohydrogel showed the highest therapeutic efficacy, as assessed by gross evaluation, X-ray and MRI examinations, and histological staining . In conclusion, the biocompatible MSC hydrogel suppressed ferroptosis activation and improved MSC survival after intradiscal transplantation. Targeting the ferroptosis signaling pathway, combined with hydrogel encapsulation, shows great potential and could be an effective strategy to enhance the effectiveness of MSC-based therapy for treating IVDD diseases.
Existing evidence suggests that cell therapy is a feasible method for partially restoring the biological characteristics of degenerated IVDs. MSCs, noted for being non-tumorigenic and easily accessible, are commonly used as cell therapy source cells. Reports indicate that intradiscal engraftment of MSCs can boost the accumulation of proteoglycans and collagen, leading to improved radiological results and pain alleviation. However, the effectiveness of unmodified MSCs is largely limited. The ineffective therapeutic outcomes of MSC therapy are attributed to the OS microenvironment within the degenerated IVDs, which is characterized by ischemic and hypoxic stress that disrupts the balance between the synthesis and degradation of the ECM. It is increasingly clear that simply implanting stem cells in vivo is insufficient to maximize their healing capabilities. Therefore, protecting the engrafted MSCs and maintaining their physiological functions is critical to enhancing the efficacy of MSC-based therapy for IVDD. Moreover, inhibiting transplanted MSC cell death to promote MSC retention is essential. The harsh OS microenvironment could trigger apoptosis, necrosis, and pyroptosis in MSCs - . However, studies have shown that inhibiting apoptosis, necrosis, and pyroptosis could only partially restore the vitality of transplanted MSCs - , . It seems that other RCD pathways might also contribute to transplanted MSC death. The process leading to ferroptosis activation involves a complex disruption of cellular iron balance, resulting in elevated cytosolic metal ion levels that cause OS and irreversible damage to cell structures and biomolecules. Notably, the harsh OS microenvironment within degenerated IVDs compromises the balance between oxidative and antioxidative systems in MSCs, making them prone to ferroptosis. Therefore, ferroptosis-associated cell death could further reduce the therapeutic effects of MSC therapy. Conversely, protecting MSCs from ferroptosis could enhance the efficacy of MSC treatment. Increasing evidence suggests that targeting ferroptosis in stem cell transplantation therapy effectively enhances MSC retention within degenerated IVDs. By manipulating stem cells to intervene in ferroptosis, researchers and clinicians could further boost the MSC survival rate and provide a significant advantage during the initial stages of in vivo placement, where MSCs face numerous challenges. Hence, suppressing transplanted MSC ferroptosis and enhancing their preservation under the harsh microenvironment of the discs are crucial for optimizing MSC-based transplantation therapy. This review focused on current ferroptosis-related strategies for enhancing in vivo cell survival and subsequent tissue regeneration by manipulating stem cells or their surrounding environment. The adaptation of MSCs to the host microenvironment is critical for successful stem cell-based transplantation therapies. Strategies for preconditioning, including genetic modification, drug preconditioning, and hydrogel encapsulation, have been developed to enhance the adaptation and functionality of MSCs in pathological contexts. Recent advances in targeted ferroptosis treatments have proved beneficial in both in vitro and in vivo models of IVDD employing stem cell transplantation. Moreover, these treatments are associated with reduced ROS accumulation and the activation of antioxidant pathways. Preconditioning MSCs through iron metabolic pathways can effectively inhibit ferroptosis by reducing intracellular ROS levels. 
The application of antioxidants during preconditioning significantly enhances the adaptability of engrafted MSCs to the harsh conditions of the IVD. This improvement may be due to the activation of the Nrf2/GPX4 signaling pathway. Non-canonical molecules, such as HIF-1α, SIRT1, PI3K / AKT, GDFs, and Prominin-2, modulate the ferroptosis process by influencing the balance between oxidation and antioxidation within MSCs. Therefore, reducing intracellular iron content and enhancing antioxidant defenses are key mechanisms for improving the adaptability of preconditioned MSCs to stressful environments. Developing MSC preconditioning strategies based on these aspects could enhance the survival rate of transplanted MSCs within the degenerated IVDs, better utilizing their intrinsic properties for repair (Figure ). In our review of preliminary clinical studies on stem cell transplantation for the repair of degenerated IVDs, various tissue-derived stem cells were considered. MSCs can be obtained from various tissues, such as bone marrow, umbilical cord, adipose tissue, each exhibiting distinct phenotypic and functional features . Although these stem cells from different sources have shown some repair effectiveness, comparing the results of various studies is challenging. One possible reason for this issue is the inherent and extensive heterogeneity among these stem cells . This variation in tissue sources may also contribute to inconsistencies in the outcomes of future clinical applications. As MSC-based therapies progress through multiple clinical trials, it is crucial to implement strategies to minimize product heterogeneity. Therefore, a standardized evaluation of the specific functions of MSCs is necessary in the future (for example, defining the optimal tissue source), moving towards more consistent and effective MSC-based therapy for IVDD . Moreover, there is currently a lack of studies that reproducibly and reliably confirm the potential of targeted MSC ferroptosis in early simulated clinical settings. Although targeted ferroptosis strategies have improved the regenerative effects of these stem cells from different sources in transplantation repair of degenerated IVDs, their preclinical and early clinical efficacy may be inconsistent and often requires verification in later trials. The susceptibility of these stem cells from different sources to ferroptosis varies in the OS environment of degenerated IVDs, leading to different retention rates after transplantation. Therefore, evaluating the efficacy of targeted MSC ferroptosis more effectively and reliably is essential. Approaches to address these differences may include carefully selecting tissue sources, donors, and specific MSC subpopulations, as well as standardized cultivation conditions and potency evaluations - . This is also a key focus area of our future research. This review outlines the potential applications of targeting MSC ferroptosis through iron metabolism and antioxidant pathways, critical molecules involved in ferroptosis, and MSC encapsulation with hydrogels. The delivery of preconditioned or genetically modified MSCs, combined with composite hydrogels, creates a three-dimensional protective environment that more effectively inhibits ferroptosis. It is particularly important to develop strategies that fully leverage the anti-ferroptosis effects of Prominin-2 in MSCs through dual-pathway inactivation of ferroptosis. 
Based on our previous research, activating Prominin-2 maintains intracellular iron ion homeostasis and may also repress BACH1 expression to regulate antioxidant pathway in MSCs. However, excessive and prolonged OS prevents Prominin-2 from effectively suppressing BACH1 expression in MSCs due to the cellular response to transcription factor BACH1 - . The BACH1 inhibitor hemin can effectively degrade BACH1, offering potential for regulating antioxidant metabolism within stem cells - . We envision combining these three strategies to target MSCs effectively, enhancing targeted ferroptosis effects. We plan to design a hydrogel composite scaffold that provides a cellular protective barrier and allows for sustained delivery of the BACH1 inhibitor hemin. This is expected to enable gene-modified MSCs to acquire barrier protection directly in the transplantation OS microenvironment and continuously inhibit intracellular BACH1 expression, thereby more effectively suppressing ferroptosis in MSCs through both oxidative and antioxidative pathways. A growing body of evidence confirms that MSCs preconditioned by targeting ferroptosis show potential for IVD regeneration in preclinical models, where engrafted MSCs increased disc height, upregulated ECM production, and elevated the expression of NP marker genes. In summary, targeted ferroptosis preconditioning is promising for promoting MSC adaptation to OS microenvironments, and introducing ferroptosis-targeting drugs can help MSCs survive in these environments and enhance their repair efficiency for IVDD.
Supported by preclinical and clinical studies, MSC-based cell therapy has emerged as a promising option for IVDD diseases and is gradually entering clinical practice. However, the low retention of MSCs in the harsh microenvironment is a major reason for their loss and suboptimal therapeutic outcomes after transplantation into degenerative IVDs. The OS microenvironment triggers a surge in intracellular ROS, destabilizing the balance between oxidation and antioxidation. This imbalance is a key pathway for ferroptosis and has been linked to the loss of MSC retention. Clinical translation strategies that target ferroptosis in MSCs could improve retention, prolong survival, and boost therapeutic outcomes. Our review emphasized the potential benefits of inhibiting ferroptosis in MSCs for treating IVDD diseases, drawing on a comprehensive assessment of pertinent basic research and early clinical translational studies. We believe that targeting ferroptosis in MSCs could offer new perspectives for the future treatment of IVDD diseases. However, there is currently a lack of clinical translational research focusing on targeting ferroptosis in MSCs to enhance the efficiency of degenerated IVD repair. In future research, we need to further confirm in vivo the impact of targeting ferroptosis on MSC retention rate and the efficiency of repairing degenerated IVDs. Additionally, there is an urgent need for specific biomarkers that are common to both MSCs and ferroptosis to accurately predict the efficiency of MSC ferroptosis inhibition. Moreover, the data from primary research have yet to meet the standards required for clinical application. Therefore, more effective strategies for inhibiting MSC ferroptosis need to be developed. Nevertheless, targeting MSC ferroptosis holds promise for novel therapies for IVDD diseases.
|
The role of direct oral anticoagulants in the era of COVID-19: are antiviral therapy and pharmacogenetics limiting factors? | f164f15c-44d1-4ce8-9a77-9b22a12a9ea8 | 9284020 | Pharmacology[mh] | In patients with COVID-19, thromboinflammation is a major cause of morbidity and mortality. COVID-19 causes a hypercoagulable disorder presenting as arterial and venous thrombotic incidents. Coagulopathy similar to disseminated intravascular coagulation (DIC) is present in a huge number of people hospitalized with COVID-19 . The etiology of COVID-19-associated coagulopathy is still unclear and involves many various cell types. In fact, observational research and case reports have demonstrated that some patients with COVID-19 admitted to the ICU met the International Society on Thrombosis and Hemostasis (ISTH) criteria for DIC. The most likely explanation for these differing statements is that although COVID-19-associated coagulopathy has some common pathophysiological elements with DIC, it has features of a separate entity. In COVID-19, the fluctuating condition of hypercoagulability also depends on the involvement of cells such as platelets, endothelial cells, and leukocytes and on the sampling time through the time of infection . The virus can promote the activation of the inflammatory response that includes increased inflammatory markers such as tumor necrosis factor (TNF), interferon-1 (INF-1), interleukin-6 (IL-6), and IL-12. This can result in the enlistment of immune cells, including neutrophils, leading to neutrophil extracellular trap (NET) formation. Increased levels of inflammatory markers in people with severe COVID-19 can initiate a hyperinflammatory reaction referred to as “cytokine storm,” which has been associated with poor outcomes. Furthermore, the virus may lead to the hyperactivation of platelets and endothelial injury, and the release of tissue factor, plasminogen activator inhibitor-1, and increased von-Willebrand factor, which activate the coagulation pathway . In patients with COVID-19, IL-6 levels correlate directly with fibrinogen levels, as well as with the increased levels of prothrombotic acute-phase reactants such as vWF, fibrinogen, and factor VIII. Prolonged immobilization also plays an important role in blood stasis, which is typical for the most serious forms of the illness . Patients with COVID-19 have a higher frequency of venous thromboembolism, most likely due to severe inflammation, coagulopathy, immobilization, and initial phases of DIC . While some reports suggest that thrombotic incidents may have been produced by immobilization and insufficient use of thromboprophylaxis, some authors reported thrombosis even with thromboprophylaxis, proposing an explicit association between COVID-19 and thrombosis . Coagulopathy is one of the most significant indicators of poor outcomes in COVID-19. For example, in patients with COVID-19 pneumonia, abnormal coagulation tests were associated with a fatal outcome . Furthermore, patients with COVID-19 pneumonia who later died had significantly higher D-dimers, fibrin degradation products (FDP), and longer prothrombin time (PT) upon hospital admission compared with surviving patients . COVID-19 patients have an elevated risk of thrombosis due to impaired mobility or immobility, acute inflammatory pathophysiological events leading to hypercoagulable blood, and possibly vascular endothelial damage, which represent all three elements of Virchow's triad. 
The ISTH recommends the determination of D-dimer, PT, and the platelet count in all patients with COVID-19, which may help to stratify patients in need of hospitalization. It also recommends monitoring PT, D-dimers, fibrinogen, and platelets in hospitalized COVID-19 patients. If these coagulation parameters deteriorate, a more aggressive treatment is likely to be necessary .
Pharmacodynamic and pharmacokinetic properties of DOACs limit the use of this class of anticoagulants in COVID-19-hospitalized patients. All DOACs are metabolized by the P-glycoprotein (P-gp) pathway, while rivaroxaban and apixaban are also metabolized by the cytochrome P450 (CYP) 3A4 pathway (in a proportion of approximately 15% and 13%, respectively). Around 70% of commonly used drugs are metabolized by the CYP3A4 pathway, which makes it a critical route of drug metabolism . Therefore, DOACs are potentially involved in multiple drug-drug interactions with a variety of anti-COVID therapeutic drugs, which can modify their anticoagulant effects . The pharmacogenetics of DOACs affect their pharmacokinetic properties. Interindividual variability regarding the efficacy and safety profile of direct anticoagulants could be related to polymorphism of the genes responsible for pharmacokinetic processes . To date, several single nucleotide polymorphisms (SNPs) of genes coding the proteins participating in the metabolism of DOACs have been correlated with anticoagulant treatment response. Genes with a notable effect on dabigatran pharmacokinetics are the ABCB1 gene encoding P-gp, and CES1 and CES2 , which encode liver carboxylesterases that hydrolyze xenobiotics, since these pathways are important in the metabolism of dabigatran etexilate. The P-gp pathway is also responsible for the metabolism of the factor Xa inhibitors apixaban, edoxaban, and rivaroxaban; thus, ABCB1 SNPs can lead to alterations in the plasma levels of DOACs. Furthermore, factor Xa inhibitors are mainly metabolized by CYP-related enzymes, such as CYP3A4 and CYP2J2, making these genes of interest for influencing the concentration of anticoagulants . However, due to the lack of sufficient evidence, pharmacogenetic testing of DOACs has still not been introduced in clinical practice, as is the case for VKA. Nevertheless, prior genotyping of patients could aid in choosing the most appropriate DOAC according to patients' individual characteristics. Antiviral drugs have been reported to have diverse levels of success in COVID-19 treatment. Moreover, hospitalized COVID-19 patients are often treated by polypharmacy, including the use of various classes of medications, such as antiviral drugs (lopinavir/ritonavir and darunavir), antibiotics, immunosuppressive agents (tocilizumab), steroids (dexamethasone, methylprednisolone), bronchodilators, and antihypertensives . Indeed, many antiviral drugs are substrates of the P-gp pathway, while remdesivir and lopinavir/ritonavir are also well-known CYP3A4 inhibitors . Testa et al. examined the serum levels of DOACs in hospitalized patients treated with antiviral medications such as lopinavir/ritonavir and darunavir. A cohort of 32 patients previously using DOACs were hospitalized for the treatment of COVID-19 pneumonia and 12 of them continued their DOAC regimen during hospitalization. All patients who remained on DOAC therapy had markedly increased serum levels of oral anticoagulants compared with prehospitalization levels . Furthermore, concomitant usage of DOACs and dexamethasone, a strong inducer of CYP3A4 and P-gp, is not recommended in patients with COVID-19-induced hypercoagulability due to the potentially reduced anticoagulant effect of DOACs . Considering that the effect of dexamethasone lasts for around 7 days, DOACs should be continued at least one week after hospital discharge . 
Obviously, these drug-drug interactions can either enhance or reduce DOACs' anticoagulant effect, thus exposing patients to an increased risk of bleeding or thrombotic complications . Likewise, an immune response due to COVID-19 infection should be considered during the administration of anticoagulant drugs. Generally, severe SARS-CoV-2 infection is associated with high levels of various cytokines, especially with markedly increased levels of IL-6 . Targeting this key mediator of inflammation represents one of the main approaches to COVID-19 treatment . Besides immune dysregulation, IL-6 can downregulate CYP3A4 and P-gp, which suggests that the immune response by itself is capable of modulating metabolic pathways . In vitro studies demonstrated that IL-6 significantly affected the expression of liver-enriched nuclear receptors, pregnane X receptor (PXR) and constitutive androstane receptor (CAR), which are master regulators of genes involved in the elimination of xenobiotics . Thus, inflammatory burden characterized by increased levels of IL-6 suppresses PXR and CAR, and consequently their target genes including CYP3A4 . Additionally, blocking IL-6 receptor with tocilizumab or sarilumab can alter CYP3A4 enzymatic activity, altering the serum levels of DOACs . Taken together, many potential drug-drug interactions and metabolic changes due to acute inflammatory response may lead to an unpredictable and unstable DOACs' effect.
COVID-19 increases the risk for thrombosis. As discussed above, hospitalized COVID-19 patients commonly have elevated levels of D-dimers . Furthermore, elevated D-dimers are the predictors of mortality, which suggests that even asymptomatic patients with significantly elevated levels should be considered for hospitalization . The prevalence of venous thromboembolism in non-critically ill hospitalized patients is about 2.6% . This number significantly increases in patients admitted to the ICU. Several studies have reported an incidence of 25%-31% for venous thromboembolism in ICUs . Another study found venous thromboembolic events, especially pulmonary thromboembolism, to be significantly more prevalent in patients with COVID-19 ARDS than in patients with ARDS of other causes, despite anticoagulant treatment . Based on these and other accumulating data, anticoagulation prophylaxis has become a cornerstone of COVID-19 therapy. Therefore, the question of using DOACs emerged. One of the first relevant studies on the use of DOAC in COVID-19 was a Swedish register-based cohort study . This study compared the outcomes of patients with ongoing DOACs use due to nonvalvular atrial fibrillation (360 patients) and patients with known cardiovascular disease who did not use DOACs (1119 patients) before COVID-19 infection. Prior usage of oral anticoagulants did not reduce the risk of either hospitalization for acute COVID-19 or ICU admission or death due to COVID-19 . The study with the largest cohort of patients examining the effect and safety profile of DOACs before COVID-19 diagnosis used the data from TriNetX, a global federated health research network . This study included 738 423 patients, with a final sample of 26 006 patients after propensity score matching (13 003 on DOACs; 13 003 not on oral anticoagulants). This study, like the previous one, demonstrated that chronic DOAC administration before COVID-19 infection did not significantly improve clinical outcomes or rates of hospital admission in a period of one month . However, another study showed that patients with severe forms of COVID-19 are likely to develop a cytokine storm, which predisposes thromboembolic events and increases the already high thrombotic risk of patients taking oral anticoagulants . On the other hand, some studies supported the administration of DOACs in COVID-19 patients requiring hospital treatment. One of them showed that chronic DOACs administration was independently associated with a decreased rate of death in 70 patients (>70 years) with interstitial pneumonia . Another cohort study from Germany showed that hospitalized COVID-19 infected patients with pre-existing therapy with VKAs or DOACs had a lower risk for extracorporeal membrane oxygenation and invasive or non-invasive ventilation . As there is no consensus among available studies regarding DOACs and COVID-19, further studies are warranted that would elucidate the relationship of virus infection and coagulation abnormalities. The main evidence against DOACs benefit in COVID-19 is the finding that DOACs do not ameliorate microthrombosis in COVID-19. Some authors have proposed that hyperinflammation and coagulopathy-leukothrombosis (NETosis) are the main drivers of thrombus formation in COVID-19-infected patients. This suggests that anticoagulant doses of LMWH disrupt NET-associated thrombi while DOACs do not . Moreover, 71.4% of hospitalized patients who died of COVID-19 developed DIC, compared with only 0.6% of surviving patients . 
This dramatic derangement of clotting system induced by COVID-19 is not appropriate for DOAC anticoagulation, making heparins a preferred anticoagulant option. Furthermore, the anti-inflammatory properties of LMWH are beneficial in acute inflammatory conditions . Hospitalized patients with severe forms of COVID-19 infections are prone to develop the failure of other organ systems besides the respiratory system. For example, renal failure is common in acute COVID-19. Thus, heparins, especially unfractionated heparin, are a safer anticoagulation choice due to DOACs' renal-dependent metabolism . Furthermore, heparin binds to COVID-19 spike proteins and IL-6, which are elevated in COVID-19 patients . Considering this mechanism of action, LMWH and UFH are considered to be the best anticoagulation agents for hospitalized patients . According to the European Society of Cardiology, LMWH should be the first option for thromboprophylaxis in all patients who do not require hemodialysis. In patients with creatinine clearance less than 15 mL/min, the use of unfractionated heparin should be considered . The American College of Cardiology recommends LMWH to be considered in all hospitalized patients (both ICU and non-ICU patients) and does not recommend treatment-dose DOAC-rivaroxaban as a thromboprophylaxis strategy .
The vast majority of patients on chronic DOAC treatment who develop COVID-19 are treated as outpatients. This group of patients is recommended to continue the usual oral anticoagulant regimen as long as they are well hydrated to maintain adequate renal function, and specific interfering antiviral drugs are not required . With the exception of patients with mechanical heart valves and those with antiphospholipid syndrome, it is even recommended to switch patients from VKA to DOACs due to restricted access to blood monitoring during lockdown periods . Furthermore, COVID-19 outpatients with cardiometabolic diseases treated with DOACs had a reduced risk of arterial and venous thrombotic outcomes compared with those treated with VKA . In addition, outpatient anticoagulation with DOACs before COVID-19 diagnosis was associated with a 43% reduced risk for hospital admission . Interestingly, COVID-19 outpatients on chronic oral anticoagulation (DOACs/VKA) had a lower risk of all-cause mortality compared with non-anticoagulated patients . However, even in patients at high risk because of age or comorbidities, prophylactic use of rivaroxaban showed no effect on COVID-19 disease progression in patients with a mild form of the disease .
COVID-19 is undoubtedly associated with increased rates of thrombotic events, mainly venous thromboembolism. These deleterious incidents have mostly occurred in hospitalized patients with severe forms of the disease . Thus, anticoagulation management represents one of the major therapies for the treatment of COVID-19 patients. Only parenteral anticoagulants, LMWH or UFH, are currently recommended in COVID-19 hospitalized patients with acute disease who receive anti-viral therapy. COVID-19 antiviral medications can significantly interact with the pharmacodynamic and pharmacokinetic properties of DOACs, changing their efficacy and safety profile. Furthermore, patients with severe acute COVID-19 often exhibit notable coagulation derangements, consequently demanding parenteral anticoagulation. Due to their short half-life, fewer drug interactions, and potential antiviral/anti-inflammatory effects, heparins are the standard treatment option for COVID-19-induced venous thromboembolism and antithrombotic prophylaxis in hospital-treated patients . The same therapeutic scheme is recommended for hospitalized patients who were using oral anticoagulants before hospitalization for COVID-19 treatment. In contrast, asymptomatic COVID-19 outpatients can continue their DOAC treatment in the usual manner, although recent meta-analyses demonstrated that chronic oral anticoagulation, whether with DOACs or VKA, did not reduce the high risk of all-cause mortality in COVID-19 patients . Further studies are needed to clarify the mechanisms of COVID-19-induced hypercoagulability as well as to propose appropriate anticoagulant treatment options for various forms of the disease.
|
Percutaneous tracheostomy simulation training for ENT physicians in the treatment of COVID-19-positive patients | 19ccb3fb-2554-4db1-9936-fc9a954f0c71 | 7284274 | Otolaryngology[mh] | Introduction The COVID-19 pandemic requires current practice to be adapted to emerging needs and enhanced protection of patients and caregivers . Patients in intensive care usually need prolonged respiratory assistance , blocking access for new cases when the saturation point is reached. Tracheostomy could hasten termination of respiratory assistance and reduce ICU stay . However, it incurs a high risk of SARS-CoV-2 viral dissemination as it involves ventilation circuit disconnection and aerosolisation. Based on experience from Asia in 2004 and 2020 , the American Academy ( https://www.entnet.org/content/tracheostomy-recommendations-during-covid-19-pandemic ) and the ENT-UK National Tracheostomy Safety Project ( https://www.entuk.org/tracheostomy-guidance-during-covid-19-pandemic ) drew up guidelines for surgical tracheostomy; intensive care physicians, however, seem to favour the percutaneous technique, which is quicker and does not require transfer to theatre, without more immediate complications , . Likewise, according to the recent guidelines from the French ENT Society (SFORL) on tracheostomy under the COVID-19 pandemic ( https://www.sforl.org/wp-content/uploads/2020/04/SFCCF-SFORL-COVID-19-2i%C3%A8me-article.pdf ), the percutaneous technique is to be preferred, to limit aerosolisation-related viral contamination of care staff and avoid theatre transfer. The procedure has been well described and is used in intensive care , , but usually requires 2 operators (1 at the patient's head for flexible endoscopic control, and 1 for the tracheostomy) and an anaesthesiologist to deal with the respirator and the drugs. To free up time for intensive care specialists, ENT physicians may be called upon to perform percutaneous tracheostomy, having the requisite anatomic knowledge and experience in dealing with the upper airway. This has been the case in the Nancy University Hospital and the Metz Military Hospital. However, the technique requires training, especially in the COVID-19 context. To minimise error, simulation can be of great help, and the present technical note describes a training schedule for ENT physicians in percutaneous tracheostomy in COVID-19+ patients.
Technique A 3-hour half-day session can include 4–6 participants working in 2–3 pairs in a simulation room equipped with a video camera and microphones. The scenario requires 1 facilitator (anaesthetist), and can be supervised by 1 or 2 session leaders. Session procedure is shown in . A 15-min video introduction presents the kit, step-by-step breakdown of the manual procedure and error screening in a sample procedure; technical pitfalls and clinical risks are highlighted. Trainees are then given a percutaneous tracheostomy kit (Ultraperc kit Portex™, Smiths Medical, Minneapolis, Minnesota, USA) to handle ahead of simulation. The second phase involves a homemade low-tech procedure simulator, costing less than €20 ( ), for practice ahead of full-scale simulation. Trainees wear a full protective outfit: surgical clothing, FFP2 mask, protective glasses, hood, overshoes and 2 pairs of sterile gloves. An “observer” is useful, to oversee donning and doffing, as recommended in the French health authority's methodology guide ( https://solidarites-sante.gouv.fr/IMG/pdf/guide-covid-19-phase-epidemique-v15-16032020.pdf ). A dedicated checklist forming an acrostic from A to M enables trainees to prepare fully before entering the room, so as not to have to go out during the procedure: • Anaesthesia: local anaesthesia. Needle and syringe with aesthetic and vasoconstrictor; • Balloon: tracheostomy and cannula balloon to be checked and syringe for inflation to be prepared in advance; • Curare: or at least deep sedation, to avoid coughing; • Disinfection/drapes: Skin disinfection and sterile drapes (ideally 4 to allow wide view of landmarks); • Extra kit: Fall-back kit, prepared in advance “just in case”; • Flexible endoscope; • Goitre: prior identification of any goitre, laryngeal deviation, short neck or other technical difficulty; • Heat/humidity exchanger with antiviral filter and T-tube check-valve; • Intubation issue: screening for intubation issues and foreseeing possible reintubation; • Jacket: gloves and other PPE; • Kelly clamp/Kit; • Laminar flow: Ultra-filtered laminar airflow or mobile decontamination unit; • Marker: Surgical Skin Marker Pen. Using a SimMan 3G™ mannequin (Laerdal, Stavenger, Norway), 4 scenarios are implemented (standard tracheostomy, small goitre, pneumothorax, intubation difficulty) ( ), in pairs plus 1 facilitator (anaesthesiology resident–actor, with earphones) ( ). The number of persons present is kept to an absolute minimum, to observe due COVID-19 precautions; the other trainees watch a direct retransmission on a screen in an adjacent room. A group debriefing is held after each scenario, with trainees, facilitator, spectators and training team. Trainees provide feedback, then analysis focuses on the technical steps, COVID-19 transmission risk, and a need for speedy execution due to the patient's respiratory frailty. Scenarios are of incremental difficulty, confronting trainees with different clinical situations and corresponding options.
Discussion Percutaneous tracheostomy is fairly easy to learn, but, in the context of a highly contagious lung disease such as COVID-19, with severely impaired respiratory capacity, it has to be performed especially efficiently and safely, adapting the steps of Ciaglia's procedure ( ). We therefore thought it essential to formalise training in a safe environment by means of a simulation workshop. Having conducted this training module 3 times, we are able to draw some lessons and lay out the best debriefing approach. The detailed results are presented in . Although, with experience, surgeons’ self-assessments seem well correlated with their real skill , trainees are prone to overestimate their skills, especially in non-technical areas . We therefore advise associating self-assessment to assessment by the supervisor or supervisors ( and ). The initial sessions highlighted technical difficulties encountered by all trainee pairs ( ), especially in managing the orotracheal intubation catheter: in half of the scenarios, the balloon got pierced or the patient was unintentionally extubated, either of which can lead to ventilation circuit leakage, impairing oxygenation for the patient and incurring viral risk for the care personnel. Errors causing extubation mainly consisted in focusing visually on the needle instead of on catheter positioning, defective control of withdrawal (the hand used should be leaning on the patient's head), and defective location of catheter position. Errors causing balloon piercing mainly consisted in focusing on the cervical part of the procedure and trying to visualise the needle as soon as the puncture was made without having first checked the position of the intubation tube. The last of these problems may be due to poor anatomy in the mannequin; the endoscopic anatomy of the mannequin should be checked in advance to avoid this bias. In the light of these findings, we introduced the low-tech pre-training step ( ), supervised use of which may shorten the learning curve by practice with the various parts of the kit and with the successive steps. The low-tech simulator provided prior identification of technical issues liable to slow down the procedure, and enhanced its safety. In the third group, which had had a short 15-min experience with the low-tech simulator, there was only 1 extubation in the 4 scenarios and no balloon piercing, and procedure time was never more than 20 minutes. Installation also caused problems in the various scenarios. Bed height needs adjusting, and a support block can be useful. To avoid asepsis errors and equipment falling on the floor, a table bridging the patient's legs could, according to trainees, be a good solution. The simulations and the discussions between trainees and supervisors notably highlighted the question of managing the intubation catheter, which is the most delicate point in the procedure. The recommendation was to have the more experienced ENT physician at the patient's head (endoscope), as this position requires good experience of intubation and flexible endoscopy to secure optimal positioning in the airway. It also makes the leader free to synchronise ventilation with the anaesthetist and guide the trainee performing the percutaneous tracheostomy as such, which was generally agreed to be more technically straightforward. The intubation catheter should be freed from its attachments and positioned in the axis of the incisors to optimise endoscopy. 
When the catheter has to be moved, the movement should be slow and careful; the hand holding the catheter should lean on the patient's face to limit the risk of unintentional extubation on exposing the inferior edge of the cricoid. It is recommended to verbally call out the mark on the catheter (with respect to the dental arcades) before the catheter is raised and once it has been positioned below the glottis. In case of accidental extubation, the flexible endoscope is the best guide for reintubation, but an Eschmann stylet and a laryngoscope should also be readily available. To minimise leakage, a finger is placed on the trocar at cervical level as soon as possible, and the physician handling the endoscope also attempts to minimise leakage at the endoscope entry point. The respirator is put on prolonged expiratory pause when leakage is most likely, if the patient can tolerate this. A further precaution against aerosolisation would be to have a portable air purifier in the room throughout the procedure, to filter out airborne viral particles before, during and after tracheostomy. In the present case, all the scenarios were played out on the “high-tech” SimMan 3G simulator, which allows real-time adjustment of physiological constants transmitted to an intensive care screen, modelling complications (pneumothorax), modifying neck conformation (to simulate goitre, laryngeal deviation, etc.) and simulating difficult intubation ( ). A less sophisticated simulator could be used, but such full-scale simulation allows consensus to be reached on difficulties, however rare, that can be encountered in practice. In the course of the procedures and in the light of previous studies, a data collection and assessment form was designed ( ). The fact that the other trainees were able to watch a given pair's simulation in real time and take part in the debriefing ironed out some difficulties for the subsequent scenarios, so that procedure time constantly decreased despite the increasing difficulty ( ). The present checklist, with its “A-to-M” mnemonic form, can be of great importance as, in the context of intensive care for COVID-19 patients, the room must be closed and the team needs to be completely self-sufficient, which requires having all necessary equipment to hand in sufficient quantity to ensure the safety of both patient and staff. Supervised doffing revealed some errors: hands too close to the collar in removing the cap, the need to put all the clothing in the trash can without having to push it in, so as to avoid aerosolisation, and errors in removing the mask (need to pull the elastic bands from behind to in front of the skull to remove the mask from the face without raising it to the hairline). In conclusion, simulation of percutaneous tracheostomy with a training module covering theory with video support, technical practice on the low-tech simulator, then clinical practice on the full-scale high-tech simulator seems suited for training ENT physicians. The module is also an opportunity to stress the specificities of protection against COVID-19 in the ICU setting. We consider the format reproducible in most simulation centres equipped with high-tech simulators, and that the low-tech simulator is easy and cheap to produce for the purely technical aspect of the training.
The authors declare that they have no competing interest. Elsewhere, Valentin Favier received funding from the French College of ENT and Head and Neck Surgery for a 1-year research project on simulation in head and neck surgery.
|
Risk and benefit for umbrella trials in oncology: a systematic review and meta-analysis | ed4aa2ba-ac01-4a5a-906e-0faf4875c8c8 | 9264503 | Internal Medicine[mh] | Precision oncology is a strategy aiming to divide cancer patients into groups that will most likely respond to a given therapy. Treatment is tailored to the molecular makeup of a tumor rather than the site or stage of disease . Umbrella trials are novel trial designs commonly used in precision oncology defined as trials with “many different treatment arms within one trial. People are assigned to a particular treatment arm of the trial based on their type of cancer and the specific molecular makeup of their cancer” . The umbrella design is a type of master protocol which allows for testing multiple agents simultaneously and may include specified modifications while the trial is ongoing [ – ]. For example, adaptive randomization is often used to assign patients to the most effective experimental treatment based on continuous data accumulation and interim analyses . These features of umbrella design are considered as having the potential to accelerate the process of drug development and maximize the benefits for trial participants . Umbrella trials may be also classified as platform trials when participants are adaptively randomized and the protocol permits considerable flexibility to add new arms when novel targets and drugs are identified or to discontinue arms with ineffective treatments [ , , , ]. However, some researchers argue that the prospect of patient clinical benefit from umbrella trials is limited [ – ]. Since umbrella trials’ implementation in 2006 , many statistical objections and ethical challenges have been identified . A favorable risk-benefit ratio is one of the fundamental ethical requirements of conducting research with human participants . The evaluation of risks and potential benefits to study participants requires a careful ethical analysis based on relevant data . The safety and toxicity rates of anticancer agents in standard phase I–III clinical trials have already been estimated [ – ]. The recent analyses were focused on targeted therapies which play an important role in precision medicine ; the performance of targeted therapies is enhanced when used in combination with cytotoxic drugs . However, the only RCT in precision oncology was negative for survival . Yet, little is known about the risk-benefit profile for umbrella oncology trials. The objective of our systematic review with meta-analysis was to evaluate the risks and benefits in umbrella clinical trials testing targeted drugs or a combination of targeted agents with chemotherapy. Specifically, our analysis addresses four issues: (1) the utility of a new strategy of clinical trials (umbrella designs) in oncology, (2) the utility of precision oncology, (3) the utility of pooling populations across arms and across chemotherapies, and (4) the likelihood that a drug works in more than one specific population.
We followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 guidelines . Eligibility criteria The inclusion and exclusion criteria were defined prospectively in the study protocol , and they are summarized in Table . The key inclusion criteria were as follows: (1) cancer umbrella clinical trials as defined by the American Society of Clinical Oncology , platform umbrella trials, or sub-studies being a part of the cancer umbrella trial; (2) adult or mixed population studies in which at least 50% participants were ≥ 18 years; (3) patients were diagnosed with any malignancy (solid or hematological) at any stage; and (4) assessment of drug-related toxicity and/or response of targeted therapy drug/s (monoclonal antibodies or small molecules or antibody-drug conjugates ) or a combination of targeted therapy with chemotherapy regimens in at least one experimental arm or sub-trial. We excluded studies evaluating the following: (1) hormone therapies, immunotherapies (e.g., monoclonal antibodies that were also immunotherapy), or chemotherapy only regimens; (2) a combination of targeted therapy with immunotherapy (response profile in such combination is much different from targeted solo therapies, making comparisons with regimens included in the review impossible); and (3) non-pharmacological modalities (e.g., radiotherapy, surgery, stem cell therapy, or any of these, except for targeted therapy, combined with surgery). Data sources and search strategy We systematically searched Embase and PubMed for umbrella trial articles and abstracts published between 1 January 2006 and 7 October 2019, using strategies that included keywords and suggested MeSH and Emtree entry terms, their synonyms, and closely related words (Additional file : Table S1). Searches were not limited by language. The starting date of our search period was determined by the year of launching the first umbrella study . Our search strategies were checked using the Canadian Agency for Drugs and Technologies in Health peer-review checklist for search strategies . Study selection process Two experienced coders (KS, MTW) independently screened the records for the initial study inclusion and performed a full-text screening to determine the final inclusions. Disagreements were resolved by discussion, and when necessary, a third person, an arbiter, was involved (MW). Data extraction We created and piloted a data extraction form. Based on the pilot, we refined and prepared the final version (available from the Open Science Framework (OSF), https://osf.io/kuyaz/ ). Data were extracted from each publication independently by two reviewers (KS, MB). Discrepancies were resolved by discussion, and when necessary, an arbiter was involved (MW). An experienced medical oncologist had a supervisory role (BG). In the case of multiple publications for the same study, the results from the full publication and/or the most recent version were used in the extraction. If the NCT number was provided, additional information was searched and extracted from ClinicalTrials.gov. Umbrella trials are very heterogeneous; some of them are studies with multiple arms (Fig. A), and others have a hierarchical structure with sub-trials having a unique registration number (Fig. B). We extracted data only from the arms or sub-trials testing targeted therapy drugs or a combination of targeted therapy with chemotherapy. 
If the umbrella trial or sub-trial included a placebo, control group, or non-match arm, data from these arms were extracted separately for further comparison of matched versus non-matched therapy. We considered the therapy as “matched” to the disease when at least one tested agent was administered based on the specific molecular features of the patient’s tumor, e.g., a drug matched to the specific genetic change. If patients were treated in (1) biomarker-negative sub-study/arm, (2) so-called non-match sub-study/arm (defined as the arm or sub-study recruiting patients that did not match any of the prespecified biomarkers), (3) placebo, or (4) control group testing only chemotherapy agents, we considered these therapies as not matching the specific tumor molecular characteristics. For each arm or sub-study, we extracted data related to study characteristics (e.g., phase, location, study status), patient characteristics (e.g., number of enrolled and eligible participants, type of malignancy), intervention (e.g., therapy type, agent names), and outcomes (e.g., objective responses, drug-related adverse events). For more details, see our extraction form ( https://osf.io/kuyaz/ ). Data curation We defined a “sub-study” of the umbrella trial as a separate trial within the umbrella protocol with a unique registration number provided by the study authors. In cases where the separate registration number was not provided, we used the term “arm.” The glossary of key manuscript terms is presented in the online appendix (Additional file : Table S2). Umbrella trials generally measure short-term clinical outcomes to yield information about preliminary drug efficacy . We included various measures of clinical benefit reported in umbrella trials: we classified the objective response rate (ORR) and progression-free survival (PFS) as proxies of therapeutic benefit and overall survival (OS) as the direct measure of clinical benefit. We defined the objective response rate as the proportion of participants with partial and/or complete response (reported separately or as an objective response rate) as defined by the study authors. For PFS and OS analyses, we used medians provided by the study authors. Risks were assessed in terms of patients experiencing severe adverse events, such as the proportion of participants experiencing grade 3, 4, or 5 drug-related AEs as defined by the Common Toxicity Criteria for Adverse Events, version 5.0 (and earlier versions) . An AE was considered as related to the study drug if it was clearly stated by the study authors; expressions such as “AEs attributed to treatment” and “AEs possibly, probably, or definitely related to study drug” were also acceptable. In cases where an event was not clearly described as treatment-related, we excluded it from our risk analysis. Risk of bias assessment Two authors (KS, MB) independently assessed the risk of bias for all included studies using the Cochrane risk of bias tools for randomized or non-randomized studies . Every sub-trial/arm was assessed separately by reading all relevant literature. Judgments were based on the algorithms proposed by the authors of ROBINS and RoB2 tools, adjusted to fit the specific aspects of our analysis. Disagreements were resolved by discussion. 
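To make the arm-level outcome definitions above concrete, the following is a minimal sketch in R of how a single extracted arm might be encoded and how its objective response and severe-toxicity rates would be derived; all field names and counts are hypothetical placeholders rather than entries from our actual extraction form.

```r
# Hypothetical arm-level record; field names and values are illustrative only,
# not taken from the review's extraction form or data.
arm <- list(
  trial_id        = "NCT00000000",             # placeholder registration number
  therapy_type    = "targeted + chemotherapy",
  matched         = TRUE,                      # FALSE for non-match, control, or placebo arms
  n_response_eval = 40,                        # patients evaluable for response
  n_toxicity_eval = 42,                        # patients evaluable for toxicity
  complete_resp   = 2,
  partial_resp    = 7,
  grade34_ae_pts  = 11,                        # patients with >=1 treatment-related grade 3/4 AE
  grade5_ae_pts   = 1                          # treatment-related deaths
)

# Objective response rate: (complete + partial responses) / patients evaluable for response
orr <- (arm$complete_resp + arm$partial_resp) / arm$n_response_eval

# Severe-toxicity rate: patients with treatment-related grade 3/4 AEs / evaluable for toxicity
grade34_rate <- arm$grade34_ae_pts / arm$n_toxicity_eval

round(c(ORR = orr, grade34_rate = grade34_rate), 3)
```

In the pooled analyses described in the next section, it is the per-arm counts (rather than the derived rates) that feed the meta-analysis of proportions.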
Statistical analysis Objective response rates, treatment-related fatal (grade 5) AE rates, and treatment-related grade 3/4 AE rates were calculated as the number of each of these outcomes in the sub-study/arm divided by the total number of patients evaluated for response or toxicity in that sub-study/arm. Standard errors and confidence intervals (CIs) for a single proportion were derived. Pooled rates were estimated using meta-analysis of proportions. A random-effects model with the restricted maximum likelihood (REML) estimator was used to account for between-study heterogeneity. I2 statistics were calculated to provide a measure of the proportion of overall variation attributable to between-study heterogeneity. Meta-regression was used to explore potential sources of heterogeneity in rates related to (1) categories of therapy type in experimental sub-trials/arms and (2) types of sub-trials/arms (experimental vs non-match/control/placebo), as well as (3) study definition and (4) number of drugs tested. The results are presented as rates with 95% CI in each category and p value from the Q test for heterogeneity in meta-regression. The average number of treatment-related grade 3 and 4 AEs per person with a 95% confidence interval was estimated using a Poisson regression model. Unweighted medians with 95% CIs were calculated for PFS and OS by bootstrap methods using the "boot" package in the R software. Meta-analysis was conducted using the metafor package (R version 3.2.3); p < 0.05 was considered statistically significant. All tests were 2-sided.
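For illustration, a minimal R sketch of the pooled-rate estimation described above is given below, using the metafor and boot packages named in this section; the per-arm counts, PFS medians, and AE totals are invented placeholders, and the code is an illustration rather than the review's actual analysis script.

```r
library(metafor)
library(boot)

# Hypothetical arms: xi = patients with an objective response, ni = patients evaluable
dat <- data.frame(xi = c(4, 11, 2, 7, 9), ni = c(25, 58, 19, 40, 51))

# Logit-transformed proportions with sampling variances, then a random-effects model
# fitted with the REML estimator
dat <- escalc(measure = "PLO", xi = xi, ni = ni, data = dat)
res <- rma(yi, vi, data = dat, method = "REML")

predict(res, transf = transf.ilogit)  # pooled rate with 95% CI on the proportion scale
res$I2                                # % of variation attributable to between-study heterogeneity
# Meta-regression over a moderator (e.g., therapy type) would add mods = ~ factor(type) to rma()

# Unweighted median PFS (months) with a bootstrap percentile 95% CI
pfs <- c(2.1, 3.5, 1.8, 5.6, 4.0, 2.9)
b <- boot(pfs, function(x, i) median(x[i]), R = 2000)
boot.ci(b, type = "perc")

# Average number of treatment-related grade 3/4 AEs per person via Poisson regression
# (hypothetical totals: 37 events among 120 patients)
fit <- glm(events ~ 1, offset = log(n), family = poisson,
           data = data.frame(events = 37, n = 120))
exp(coef(fit))              # events per person
exp(confint.default(fit))   # Wald-type 95% CI on the per-person rate
```

The logit ("PLO") transform is one common choice for stabilizing proportions near 0 or 1; the same workflow applies to grade 3/4 and grade 5 AE rates by swapping in the corresponding counts.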
The inclusion and exclusion criteria were defined prospectively in the study protocol , and they are summarized in Table . The key inclusion criteria were as follows: (1) cancer umbrella clinical trials as defined by the American Society of Clinical Oncology , platform umbrella trials, or sub-studies being a part of the cancer umbrella trial; (2) adult or mixed population studies in which at least 50% participants were ≥ 18 years; (3) patients were diagnosed with any malignancy (solid or hematological) at any stage; and (4) assessment of drug-related toxicity and/or response of targeted therapy drug/s (monoclonal antibodies or small molecules or antibody-drug conjugates ) or a combination of targeted therapy with chemotherapy regimens in at least one experimental arm or sub-trial. We excluded studies evaluating the following: (1) hormone therapies, immunotherapies (e.g., monoclonal antibodies that were also immunotherapy), or chemotherapy only regimens; (2) a combination of targeted therapy with immunotherapy (response profile in such combination is much different from targeted solo therapies, making comparisons with regimens included in the review impossible); and (3) non-pharmacological modalities (e.g., radiotherapy, surgery, stem cell therapy, or any of these, except for targeted therapy, combined with surgery).
We systematically searched Embase and PubMed for umbrella trial articles and abstracts published between 1 January 2006 and 7 October 2019, using strategies that included keywords and suggested MeSH and Emtree entry terms, their synonyms, and closely related words (Additional file : Table S1). Searches were not limited by language. The starting date of our search period was determined by the year of launching the first umbrella study . Our search strategies were checked using the Canadian Agency for Drugs and Technologies in Health peer-review checklist for search strategies .
Two experienced coders (KS, MTW) independently screened the records for the initial study inclusion and performed a full-text screening to determine the final inclusions. Disagreements were resolved by discussion, and when necessary, a third person, an arbiter, was involved (MW).
We created and piloted a data extraction form. Based on the pilot, we refined and prepared the final version (available from the Open Science Framework (OSF), https://osf.io/kuyaz/ ). Data were extracted from each publication independently by two reviewers (KS, MB). Discrepancies were resolved by discussion, and when necessary, an arbiter was involved (MW). An experienced medical oncologist had a supervisory role (BG). In the case of multiple publications for the same study, the results from the full publication and/or the most recent version were used in the extraction. If the NCT number was provided, additional information was searched and extracted from ClinicalTrials.gov. Umbrella trials are very heterogeneous; some of them are studies with multiple arms (Fig. A), and others have a hierarchical structure with sub-trials having a unique registration number (Fig. B). We extracted data only from the arms or sub-trials testing targeted therapy drugs or a combination of targeted therapy with chemotherapy. If the umbrella trial or sub-trial included a placebo, control group, or non-match arm, data from these arms were extracted separately for further comparison of matched versus non-matched therapy. We considered the therapy as “matched” to the disease when at least one tested agent was administered based on the specific molecular features of the patient’s tumor, e.g., a drug matched to the specific genetic change. If patients were treated in (1) biomarker-negative sub-study/arm, (2) so-called non-match sub-study/arm (defined as the arm or sub-study recruiting patients that did not match any of the prespecified biomarkers), (3) placebo, or (4) control group testing only chemotherapy agents, we considered these therapies as not matching the specific tumor molecular characteristics. For each arm or sub-study, we extracted data related to study characteristics (e.g., phase, location, study status), patient characteristics (e.g., number of enrolled and eligible participants, type of malignancy), intervention (e.g., therapy type, agent names), and outcomes (e.g., objective responses, drug-related adverse events). For more details, see our extraction form ( https://osf.io/kuyaz/ ).
We defined a “sub-study” of an umbrella trial as a separate trial within the umbrella protocol with a unique registration number provided by the study authors. Where a separate registration number was not provided, we used the term “arm.” A glossary of key manuscript terms is presented in the online appendix (Additional file : Table S2). Umbrella trials generally measure short-term clinical outcomes to yield information about preliminary drug efficacy. We included various measures of clinical benefit reported in umbrella trials: we classified the objective response rate (ORR) and progression-free survival (PFS) as proxies of therapeutic benefit and overall survival (OS) as the direct measure of clinical benefit. We defined the objective response rate as the proportion of participants with a partial and/or complete response (reported separately or as an objective response rate) as defined by the study authors. For PFS and OS analyses, we used the medians provided by the study authors. Risks were assessed in terms of patients experiencing severe adverse events (AEs), namely the proportion of participants experiencing grade 3, 4, or 5 drug-related AEs as defined by the Common Terminology Criteria for Adverse Events (CTCAE), version 5.0 (and earlier versions). An AE was considered related to the study drug if this was clearly stated by the study authors; expressions such as “AEs attributed to treatment” and “AEs possibly, probably, or definitely related to study drug” were also acceptable. Where an event was not clearly described as treatment-related, we excluded it from our risk analysis.
Two authors (KS, MB) independently assessed the risk of bias for all included studies using the Cochrane risk of bias tools for randomized or non-randomized studies . Every sub-trial/arm was assessed separately by reading all relevant literature. Judgments were based on the algorithms proposed by the authors of ROBINS and RoB2 tools, adjusted to fit the specific aspects of our analysis. Disagreements were resolved by discussion.
Objective response rates, treatment-related fatal (grade 5) AE rates, and treatment-related grade 3/4 AE rates were calculated as the number of each of these outcomes in the sub-study/arm divided by the total number of patients evaluated for response or toxicity in that sub-study/arm. Standard errors and confidence intervals (CIs) for a single proportion were derived. Pooled rates were estimated using meta-analysis of proportions. A random-effects model with the restricted maximum likelihood (REML) estimator was used to account for between-study heterogeneity. I² statistics were calculated to provide a measure of the proportion of overall variation attributable to between-study heterogeneity. Meta-regression was used to explore potential sources of heterogeneity in rates related to (1) categories of therapy type in experimental sub-trials/arms and (2) types of sub-trials/arms (experimental vs non-match/control/placebo), as well as (3) study definition and (4) the number of drugs tested. The results are presented as rates with 95% CIs in each category and the p value from the Q test for heterogeneity in meta-regression. The average number of treatment-related grade 3 and 4 AEs per person, with a 95% confidence interval, was estimated using a Poisson regression model. Unweighted medians with 95% CIs were calculated for PFS and OS by bootstrap methods using the “boot” package in R. Meta-analysis was conducted using the metafor package (R version 3.2.3); p < 0.05 was considered statistically significant. All tests were 2-sided.
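As a rough illustration of the pooling approach, the sketch below pools arm-level proportions under a random-effects model and derives a percentile-bootstrap CI for an unweighted median. It is a simplified stand-in for the analysis described above: it works on raw proportions with the DerSimonian–Laird estimator of between-study variance rather than the REML estimator implemented in R's metafor package, and the example counts are arbitrary rather than our extracted data.

```python
import numpy as np

def pool_proportions_dl(events, totals, z=1.96):
    """Random-effects pooling of arm-level proportions (DerSimonian-Laird).
    Returns the pooled rate, its 95% CI, and the I^2 heterogeneity statistic."""
    events, totals = np.asarray(events, float), np.asarray(totals, float)
    p = events / totals
    v = p * (1.0 - p) / totals             # within-arm variance of a raw proportion
    w = 1.0 / v                            # fixed-effect (inverse-variance) weights
    p_fixed = np.sum(w * p) / np.sum(w)
    q = np.sum(w * (p - p_fixed) ** 2)     # Cochran's Q
    df = len(p) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)          # between-arm variance (DL estimator)
    w_re = 1.0 / (v + tau2)                # random-effects weights
    pooled = np.sum(w_re * p) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, (pooled - z * se, pooled + z * se), i2

def bootstrap_median_ci(values, n_boot=10_000, lo=2.5, hi=97.5, seed=0):
    """Percentile bootstrap CI for an unweighted median (e.g., arm-level median PFS)."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values, float)
    meds = np.median(rng.choice(values, size=(n_boot, len(values)), replace=True), axis=1)
    return np.median(values), (np.percentile(meds, lo), np.percentile(meds, hi))

# Arbitrary example data (not the review's extracted counts)
responders = [5, 12, 3, 20, 7]
evaluable = [30, 45, 25, 60, 40]
rate, ci, i2 = pool_proportions_dl(responders, evaluable)
print(f"pooled ORR = {rate:.1%} (95% CI {ci[0]:.1%} to {ci[1]:.1%}), I^2 = {i2:.1f}%")

median_pfs, pfs_ci = bootstrap_median_ci([1.2, 2.0, 2.5, 3.1, 17.0])
print(f"pooled median PFS = {median_pfs:.1f} months (95% CI {pfs_ci[0]:.1f}-{pfs_ci[1]:.1f})")
```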
We retrieved 6207 references from the database searches. After duplicate removal, we screened 4738 records, from which we reviewed 215 full-text documents. In the next step, we searched the references of the initially included studies and ClinicalTrials.gov entries and found 2 additional articles. Finally, we included 29 records of 31 sub-trials or arms of 9 umbrella trials. Figure summarizes the search results and reasons for exclusion. A full list of included studies with therapy type, malignancy names, and agents tested is presented in the online appendix (Additional file : Table S3) [ – ].

Trial and patient characteristics

We included 31 sub-trials or arms of nine umbrella trials (N = 1637). Six of the 9 umbrella trials can also be classified as platform umbrella trials (Additional file : Table S3). The majority (19; 61.2%) of included studies were sub-studies with a separate registration number, ten (32.3%) were umbrella trial arms without a separate registration number, and two (6.5%) were arms of the included sub-studies (Table ). The majority (27; 87.1%) were biomarker-based experimental sub-studies/arms, two (6.5%) were non-biomarker specific, and one (3.2%) was either a placebo or a control group. Twenty-one sub-studies/arms (67.8%) tested targeted therapies, eight (25.8%) tested a combination of targeted therapy with chemotherapy, one (3.2%) tested standard-of-care chemotherapy, and one (3.2%) was a placebo arm. The majority of included sub-trials/arms (25; 80.6%) were phase II studies and tested one investigational drug (22; 71.0%). Six sub-trials/arms (19.4%) were funded by private sponsors, four (12.9%) by public institutions, and 21 (67.7%) by both private and public sponsors. Thirteen sub-trials/arms (42.0%) were conducted in North America, 12 (38.7%) in Asia, five (16.1%) in Europe, and one (3.2%) in Australia. In 13 sub-studies/arms, the median age of participants was below 65 years; in 5 sub-studies/arms, the median age was 65 years or higher; and in the remaining 13 sub-studies/arms, the median age was not reported (Table ). All sub-trials/arms involved only patients with solid tumors.

Benefit in experimental sub-trials/arms

Twenty-two of 27 experimental sub-trials/arms reported response data. We identified 185 objective responses (including 142 partial and 5 complete responses reported separately and 38 reported as objective responses) among 879 participants evaluated for response. One targeted therapy experimental arm was excluded from the meta-analysis because only 1 patient was evaluated for response in that arm. The pooled ORR across 21 sub-trials/arms (878 patients) was 19.7% (95% CI 10.5–28.8; I² = 97.3%; Fig. ). The ORR for targeted therapies was significantly lower than the ORR for the combination of targeted therapy with chemotherapy: 13.3% (95% CI 4.6–21.9) vs 39.0% (95% CI 21.3–56.8), p = 0.005. Median PFS, ranging from 1.2 to 17.0 months, was reported in 16 experimental sub-trials/arms. The pooled median PFS was 2.5 months (95% CI 2.0–7.8). Median OS, ranging from 4.7 to 10.4 months, was reported in 12 experimental sub-trials/arms. The pooled median OS was 6.8 months (95% CI 6.0–8.0). We did not compare pooled PFS and OS between targeted therapy and the combination of targeted therapy with chemotherapy because only one sub-study/arm testing the combination reported these outcomes.
Risk in experimental sub-trials/arms

We analyzed 9 drug-related grade 5 AEs among 999 participants evaluated for toxicity in 15 experimental sub-trials/arms (the 9 drug-related deaths occurred in 5 experimental sub-trials/arms; the remaining 10 sub-trials/arms reported no drug-related deaths). The pooled drug-related death rate across these sub-trials/arms was 0.7% (95% CI 0.1–1.2; I² = 4.5%; Fig. ). The pooled drug-related grade 5 AE rate was 1.1% (95% CI 0.2–2.0) for the 12 experimental targeted therapy sub-trials/arms (502 participants evaluated for toxicity) and 0.5% (95% CI 0.1–1.2) for the 3 experimental sub-trials/arms combining targeted therapy with chemotherapy (497 participants). Due to the small number of events, statistical comparisons between the different groups were not performed. Ninety-one patients (34.0%; 95% CI 15.2–52.9) experienced treatment-related grade 3/4 AEs in 5 sub-trials (Fig. ). The treatment-related grade 3/4 AE rate in the four of these sub-trials/arms that tested targeted therapy drugs was 42.6% (95% CI 31.2–53.9). Three hundred eleven drug-related grade 3/4 AEs were reported in 11 experimental sub-trials/arms (10 targeted therapy, one targeted therapy with chemotherapy) among 695 toxicity-evaluable patients, with an average drug-related grade 3/4 AE rate per person of 0.45 (95% CI 0.40–0.50).

Overall benefit and risk

Twenty-five of 31 sub-trials/arms reported 212 objective responses (including 169 partial and 5 complete responses reported separately and 38 reported as objective responses) among 1148 participants evaluated for response. One arm was excluded from the meta-analysis because only 1 patient was evaluated for response in that arm. The pooled overall ORR across 24 sub-trials/arms (1147 patients) was 17.7% (95% CI 9.5–25.9; I² = 97.3%; Fig. ). We did not find a significant difference in ORR between experimental sub-trials/arms and non-matched therapies: 19.7% (95% CI 10.5–28.8) vs 7.1% (95% CI 0.0–13.5); p = 0.25. Median PFS, ranging from 1.2 to 17.0 months, was reported in 19 sub-trials/arms. The pooled median PFS was 2.4 months (95% CI 1.9–2.9). For the 3 sub-trials/arms with non-matched therapies, the pooled median PFS was 2.0 months (95% CI 1.2–3.5). Median OS, ranging from 4.7 to 10.5 months, was reported in 13 sub-trials/arms. The pooled median OS was 7.1 months (95% CI 6.1–8.4). We did not compare the pooled OS rates between sub-trial/arm types because only one arm in the non-matched group reported this outcome. We identified 12 drug-related grade 5 AEs among 1233 patients evaluable for toxicity in 17 sub-trials/arms. The overall pooled drug-related death rate across these sub-trials/arms was 0.8% (95% CI 0.3–1.4; I² = 7.32%; Fig. ). We did not compare pooled drug-related death rates between sub-trial/arm types because only two sub-trials/arms in the non-match sub-category reported this outcome. Treatment-related grade 3/4 AEs were not reported in non-matched sub-trials/arms. The overall treatment-related grade 3/4 AE rate and the average number of drug-related grade 3/4 AEs per person remain the same as for the experimental arms described previously.

Sub-group analysis of benefit and risk

We observed significant differences in ORR between phase II and phase III trials: 14.0% (95% CI 6.9–21.2) vs 36.4% (95% CI 3.4–69.3); p = 0.03 (Additional file : Table S4). Other benefit and risk measures in sub-categories are shown in the online appendix (Additional file : Table S4).
Risk of bias assessment

The risk of bias of the included sub-studies/arms is available in the online appendix (Figs. - ). The two randomized sub-studies were rated as having “low” or “some concerns” risk of bias across all domains. Of the 27 non-randomized sub-trials/arms, 16 (59%) were assessed as having an overall risk of bias of “serious” or “critical.” High levels of bias were mainly due to bias in the selection of the reported result.
To our knowledge, we report the first systematic review and meta-analysis providing risk and benefit estimates for cancer umbrella trials testing targeted therapies or a combination of targeted therapies with chemotherapy. These analyses address four aspects: (1) the utility of a new clinical trial strategy (umbrella designs) in oncology, (2) the utility of precision oncology, (3) the utility of pooling populations across arms and across chemotherapies, and (4) the likelihood that a drug works in more than one specific population. Maximizing benefit and minimizing harm for participants are crucial ethical requirements of clinical research. Umbrella trial designs are believed not only to be more flexible than traditional designs, by allowing simultaneous evaluation of multiple treatment options, but also to have a better risk/benefit ratio for trial participants. However, our findings do not support the expectation of an increased benefit/risk ratio for participants of cancer umbrella trials. Normally, the risk-benefit ratio in clinical trials is a function of the drug. However, clinical trial design may also affect this equation. Some elements of the umbrella design are expected to provide a more favorable risk/benefit ratio for participants than classical trial designs. For example, the “precision medicine” approach of assigning patients to treatment arms based on the genetic characteristics of their tumor may lead to an expectation of higher response rates and lower adverse event rates. Moreover, most of the umbrella trials in our study used a platform design and an adaptive design, which means that, during the course of the study, more patients may be enrolled in an arm with more promising health benefits. Our results cast doubt on these assumptions. For example, the majority of the sub-trials/arms in our sample were phase II trials, with a pooled ORR of 14.0%. This result is similar to the pooled ORR in a previous meta-analysis of phase II single-agent studies (12.7%) but lower than the overall ORR in eight cancer basket trials published until 31 March 2018 (25%). Our findings suggest that, across the umbrella clinical trials in oncology published before 7 October 2019, the chance of response to targeted therapy alone was lower than to the combination of targeted therapy with chemotherapy. This observation is consistent with previous findings showing increased overall response rates in trials testing a combination of targeted therapies with cytotoxic drugs. Unfortunately, comparison of the overall benefits and risks in our meta-analysis to other studies is limited because the included sub-trials/arms were very heterogeneous, e.g., they were of different phases and included heterogeneous populations and various types of cancer. The majority of the included sub-trials/arms reported surrogate outcomes: ORR (25; 80.6%) or PFS (19; 61.3%). Thirteen sub-trials/arms (41.9%) reported OS. This variety of endpoints makes comparison of outcomes in meta-research more demanding. Importantly, although sometimes accepted by regulatory agencies for approval, ORR and PFS are surrogate markers that have shown poor correlation with OS and quality of life in most tumor types. Furthermore, early phases of clinical trials also poorly predict phase III success. Therefore, surrogate measures should be considered only hypothesis-generating and not markers of true clinical benefit [ – ].
We did not find significant differences in objective response rates between therapies matched to specific cancer biomarkers and non-matched therapies or controls. This finding may suggest that the approaches used to maximize direct benefit in umbrella trials, e.g., genome-driven stratification and assignment to the most promising arm, may not be sufficient to deliver a therapy appropriately matched to a heterogeneous and mutable tumor. This also raises the question of whether the biomarkers used to define targets in precision oncology are suboptimal. However, other systematic reviews analyzing pediatric phase I and phase II cancer trials showed high objective response rates in trials with target-specific enrolment or in trials adopting a personalized treatment approach. The drug-related death rate of 1.1% in phase II sub-trials/arms in our sample (Additional file : Table S4) is also similar to the drug-related grade 5 AE rate in a previous meta-analysis of phase II single-agent studies that used a personalized strategy (1.5%). This may indicate that phase II umbrella trials do not offer a lower risk for trial participants than classical phase II clinical trials. The debate about whether precision medicine is an illusion or an objective reality continues in oncology. Our findings do not support the expectation of increased participant benefit in cancer umbrella trials. Patients should be clearly informed that the majority of participants (82.3%) in the first umbrella trials testing targeted therapy agents or combinations of targeted therapy with chemotherapy did not respond to the given therapy. The objective of our study was to analyze the risk/benefit ratio for umbrella clinical trials testing targeted drugs or a combination of targeted therapy with chemotherapy. Analyses of this type are crucial sources of information for participants, researchers, ethics committee members, and other stakeholders and decision-makers. When performing our analysis, we found that the complexity of the umbrella design and the low quality of reporting make comparison of results across trials and sub-trials very difficult. Many arms and sub-trials of the umbrella trials included in our study were closed without any explanation and without reporting results. Our findings may be used to improve the design and reporting of umbrella trials.
Our study should be interpreted in light of the following limitations. First, we included all umbrella trials, as well as sub-trials that were part of an umbrella trial, in which one cancer type was divided into sub-types to test different drugs. Because umbrella trials are very heterogeneous, have a hierarchical structure, and every arm may be of a different phase, we did not compare whole umbrella trials but instead compared sub-trials and arms. We created this novel methodology to analyze the outcomes of a complex umbrella trial design, but our methods may be further modified and improved. Second, we observed inconsistency in outcome reporting and selective reporting of results in the majority of the included umbrella trials. For example, in the VIKTORY trial, outcomes were not provided for six out of 10 arms. The low quality of reporting of umbrella trials is a serious issue, not only because of the difficulties it creates for meta-research but also for other ethical reasons, including patient safety and the decision-making process in designing new trials. Third, the ORR and PFS reported in most umbrella trials are considered surrogate measures of benefit and are not markers of direct clinical benefit. We analyzed them because they are the best available surrogates of clinical benefit. Fourth, our systematic review is restricted to umbrella trials whose results were reported between 2006 and 2019 and does not include trials developed based on the findings of those initial trials. As relatively novel designs, umbrella trials are expected to improve methodologically over time and may produce results leading to different conclusions than those presented here. Fifth, the limited number of eligible sub-trials reporting the outcomes of interest did not allow for all possible comparisons, for example, of pooled PFS and OS. Sixth, the majority of the analyzed studies reported summary results; thus, we could not test whether risk and benefit in umbrella trials depend on the line of treatment or cancer histology.
This is the first systematic review with meta-analysis assessing the risks and benefits of umbrella clinical trials. We found that the overall objective response rate in umbrella trials testing targeted drugs or a combination of targeted therapy with chemotherapy was 17.7%, and the overall drug-related death rate was 0.8%. Patients enrolling in umbrella trials should be clearly informed about the risk and benefit predictions for these trials. Our findings do not support the expectation of increased patient benefit in cancer umbrella trials. Further studies should investigate whether the umbrella trial design and the precision oncology approach improve patient outcomes. Our study identified serious problems with the reporting and transparency of umbrella designs, which may undermine the promise of more efficient and patient-centered trials.

Registration and protocol

The study protocol was prospectively registered in PROSPERO (CRD42020171494).
Additional file 1: Table S1. Search strategy. Table S2. Glossary of key manuscript terms. Table S3. List and characteristics of included studies. Table S4. Objective response rates and drug-related fatal toxicity rates assessed in subgroups. Fig. S1. Summary risk of bias graph of randomized sub-studies. Fig. S2. Review authors’ judgements about risk of bias of randomized sub-studies. Fig. S3. Summary risk of bias graph of all non-randomized sub-studies/arms. Fig. S4. Review authors’ judgements about risk of bias for non-randomized sub-studies/arms.
Development of a rapid and economic | 8e016a03-d486-4345-a86b-a67d2198c46d | 6207748 | Physiology[mh] | Electrophysiology is a unique component of biomedical science capable of investigating the electrical properties of an individual cell, organ, or complete organism in the context of physiology. Regarding the heart, the process of recording cardiac electrical activity is known as electrocardiography (ECG). Generally, multiple electrodes are attached to specific sites on test subject’s body surface to record the electrical signals generated from the cardiac conduction system, which represent the polarization and depolarization of cardiac muscle tissues. These signals can then be interpreted to reveal normal conduction or specific diseases that result in cardiac arrhythmia. Thus, we can recognize a health condition emerging in real time by examining the in vivo ECG recording and identifying relevant electrophysiological alterations in the heart. Since ECG reflects in vivo cardiac function, current FDA (Food and Drug Administration) regulation requires pharmaceutical companies to perform animal ECG assessments for cardiac toxicity when developing a new drug at the preclinical stage. These assessments are required to avoid adverse drug effects on the human heart, such as arrhythmia or heart failure. Therefore, there is an important need to develop efficient electrocardiogram methods for predictive assays of cardiotoxicity in animal models. Given the rapid advancement in gene editing technology, spontaneous heart disease models have become easier to generate in zebrafish, as zebrafish are an accessible model organism for genetic modification and crossbreeding. Ideally, an animal heart disease model should be easily manipulated, be reproducible, exhibit representative characteristics of human pathophysiology and be ethically sound. The low cost and easy manipulation of zebrafish for cardiovascular research make it an increasingly popular animal model to be considered for ECG studies . Zebrafish has only two chambers in its heart, but the cardiac electrophysiology of zebrafish is highly similar to that of the four-chambered heart of human. Cardiac action potentials (AP) in both human and zebrafish are generated by the movement of ions through the transmembrane ion channels in cardiac cells . It is noteworthy that ion channels dominating the AP upstroke in zebrafish are well conserved in human. Consequently, zebrafish heart also presents a distinct P-wave, QRS-complex, and T-wave on ECG recording, all of which are comparable to the ECG features of human , . However, zebrafish ECG is not yet an easily accessible technique. Current zebrafish ECG recordings typically require specialized devices and software, including an amplifier, a bandpass filter, and digitized data-processing software, which collectively come at high cost. In this paper, we describe the setup of an economic zebrafish ECG system that is based on integration of a ready-to-use electrophysiological recoding kit with custom-made needle electrode probe, which should be highly accessible for most research and teaching laboratories. For general testing of the optimized protocol, we used this system to monitor the cardiac physiological responses of adult zebrafish to common anesthetics and selected antiarrhythmic medications in real time. We anticipate that the devices and protocol described in this study can be established in any laboratory, which would greatly benefit educational and research practices with zebrafish.
Constructing the adult zebrafish ECG system

The simple and ready-to-use ECG kit (Ez-Instrument Technology Co., Taiwan) was originally developed for teaching purposes in high-school biology laboratories. The original kit comprised an integrated signal receiver and amplifier, and packaged software for signal visualization and basic data-processing tools. We explored the use of the kit for adult zebrafish ECG in more advanced research applications by re-designing the specialized electrode probe for this new purpose. After reviewing the commercially available electrode probes and published protocols on adult zebrafish ECG , we custom-made a three-needle electrode probe that could be directly connected to the ready-to-use ECG kit for real-time recording of ECG signals in anesthetized adult zebrafish (Supplemental Fig. ). The design of the three-point needle electrode probe integrated a pectoral electrode, an abdominal electrode and a grounding electrode (Supplemental Fig. ). Each electrode harbored a stainless-steel needle coated with insulating paint on most of its surface to reduce noise from the aquatic environment. The needle head was uncoated to leave a 1-to-1.5 mm exposed area for signal detection, whereas the tail end was welded to the connecting wire and clustered into a 3-pole auxiliary connector. The probe had a plastic holder to secure the stainless-steel needle and connective wire and enable the probe to be fastened onto the micromanipulator (Supplemental Fig. ). The ECG system was ready for experiments once the customized electrode probe was connected to the ECG kit and the analysis software was running. In summary, the zebrafish ECG system consisted of a three-needle electrode probe, two micromanipulators to hold the pectoral and abdominal needles, an ECG kit, and a laptop computer preloaded with the provided software.

ECG recording of adult zebrafish

To record adult zebrafish ECG in real time, we referenced previously described protocols to develop an optimized procedure . During ECG recording, the zebrafish was sedated in the anesthetic water bath while wedged into a cleft in a damp sponge to maintain its dorsal side up. A concave triangular section of the sponge was cut away to enable the fish to move its gill opercula during the experiment. The pectoral scales above the heart were removed with sharp tweezers to allow penetration of the electrode needle tip. The field of operation, i.e., the triangular region between the pectoral fins below the head, was monitored under a microscope. With the micromanipulator, the pectoral electrode needle was gently inserted into the thorax to a depth sufficient to detect the ECG signal without damaging the heart tissue (Fig. ). The abdominal probe was gently inserted into the cloaca (i.e., the posterior anal/reproductive orifice) with a second micromanipulator. The insertion depth for both the pectoral and abdominal needle electrodes was approximately 1–1.5 mm. The grounding electrode was placed at the corner of the damp sponge as a reference electrode (Fig. ). During ECG recording, it was necessary to adjust the pectoral electrode probe slightly until the P wave, QRS complex and T wave were clearly recognized in the software’s display window. With the tricaine methanesulfonate (MS-222)/isoflurane combination anesthetic described below, the zebrafish could be safely sedated for 30 minutes. The recording time window was adjusted according to the individual assay and drug response.
After recording, the zebrafish was immediately transferred to a recovery tank with clean system water.

Electrocardiography of adult zebrafish

The system and procedure described above allowed reliable detection and recording of real-time ECG signals from adult zebrafish. There was no need for additional data fitting or processing. An example of the baseline real-time zebrafish ECG waveform is shown in Fig. , which is highly comparable to the human ECG. Key features of the zebrafish ECG signal, such as the P wave, QRS complex and T wave, can be easily recognized (Fig. ). To establish standard zebrafish ECG parameters, we determined the mean heart rate of wild-type AB zebrafish to be 148 ± 15 beats per minute (bpm). After statistical analysis, we found that in normal, 10- to 12-month-old zebrafish, the average PR interval was 62 ± 4 milliseconds (ms), the average QRS interval was 44 ± 3 ms, the average RR interval was 469 ± 54 ms, the average QT interval was 215 ± 43 ms, and the mean heart rate-corrected QT interval (QTc) was 279 ± 60 ms. These results were highly consistent with previous findings , further demonstrating that this economical ECG system is comparable to more complex ECG systems.

Electrocardiography of zebrafish under prolonged sedation

Before performing the chemical-induced arrhythmic response assays, we verified the anesthetic effects on zebrafish cardiac physiology using the ECG system. We did so because MS-222, the only FDA-approved anesthetic for fishes, has been shown to affect heart rate in adult zebrafish during sedation. As an alternative anesthetic approach, we used the 140 ppm combined anesthetic formula (70 ppm MS-222 + 70 ppm isoflurane) previously developed in our laboratory, which shows minimal effects on the zebrafish heart rate . In the MS-222-alone group, the initial heart rate at the first minute was 108 ± 16 bpm, which was significantly lower than that of the combined-anesthetic-formula group, i.e., 148 ± 15 bpm (Fig. ). As the sedation time increased, the heart rate in the MS-222 group significantly decreased to 89 ± 17 bpm at 5 minutes, whereas that of the combined-formula group was sustained at 137 ± 16 bpm, which was not statistically different from the rate at one minute. As expected, after 10 minutes of sedation in MS-222, the heart rate further decreased to 64 ± 18 bpm, whereas the heart rate of the MS-222/isoflurane-combination group remained at 131 ± 19 bpm. These data are consistent with previously published findings . Notably, most of the adult zebrafish in the MS-222-alone group did not recover after 10 minutes of sedation (data not shown). We next focused on analyzing the heart rate variation under prolonged sedation with the combined anesthetic formula (Fig. ). After 1, 5, and 10 minutes of sedation under MS-222/isoflurane anesthesia, the average heart rate was 148 ± 15 bpm, 137 ± 16 bpm and 131 ± 9 bpm, respectively. These data are consistent with previous findings . Prolonged sedation for 20 and 30 minutes yielded average heart rates of 121 ± 11 bpm and 113 ± 7 bpm, respectively. Taken together, these data demonstrated that MS-222 may not be a suitable anesthetic for ECG experiments with adult zebrafish. Furthermore, prolonged sedation under the MS-222/isoflurane combination formula led to a gradual decrease in heart rate beyond 10 minutes of sedation.
We hence recommend that ECG recording experiments be performed using the combined anesthetic formula within the first five minutes; otherwise, the anesthetic effects may result in considerable interference with subsequent analysis.

Effect of isoproterenol treatment on drug-induced bradycardia

After establishing the optimized ECG assay conditions, we analyzed the effects of common cardiovascular drugs that are frequently used in clinical practice. We started with isoproterenol, a nonselective β-adrenergic agonist that is an isopropylamine analog of adrenaline. Having a well-studied mechanism of action and pharmacological effects on cardiac muscle contractility, isoproterenol can increase the human heart rate and has been prescribed for the treatment of bradycardia . Since MS-222 alone was shown in our experiments to reduce heart rate in adult zebrafish, we used this chemical to mimic drug-induced bradycardia and test the zebrafish response to isoproterenol treatment. Before isoproterenol treatment, the baseline ECG showed an average heart rate of 159 ± 13 bpm at the first minute. After 5 minutes of sedation in 160 ppm MS-222 alone, the heart rate had decreased to 130 ± 16 bpm, as expected. After retro-orbital injection of isoproterenol, a change in heart rate was observed within 60 seconds (Fig. ). The average heart rate significantly increased to 155 ± 16 bpm with 5 μl of 10 μM isoproterenol and was higher than that under the MS-222-induced bradycardia condition. We also tested the effect of lower doses of isoproterenol and found that isoproterenol had a dose-dependent effect on adult zebrafish heart rate (Fig. ). In the group administered 5 μl of 10 μM isoproterenol, the heart rate increased 1.25-fold after injection (Fig. ), whereas the heart rate increased only 1.04-fold in the 0.5 μM group, 1.12-fold in the 1 μM group, 1.14-fold in the 5 μM group and 1.22-fold in the 7.5 μM group (Fig. ). These results are similar to previous findings reported in humans .

Induction of bradycardia under verapamil treatment

Verapamil is a non-dihydropyridine calcium channel antagonist and a common antihypertensive with an anti-angina effect. However, verapamil can have negative cardiovascular effects in humans, such as abnormal ECG and reduced heart rate . We thus investigated whether these cardiac effects of verapamil could be observed in zebrafish and monitored by ECG in real time. Before verapamil treatment, the baseline ECG showed an average heart rate of 155 ± 13 bpm (Fig. ). After retro-orbital injection of verapamil, the heart rate decreased to 116 ± 15 bpm within 60 seconds, representing a 25% reduction (Fig. ). After 5 minutes, the heart rate had further decreased to 88 ± 22 bpm, a 43% reduction (Fig. ). As expected, these results were similar to previous findings , and the effects mimicked the heart rate-lowering effects of verapamil reported in humans . Our results demonstrate that zebrafish and humans have highly conserved action potential responses to verapamil, which confirms the feasibility of using zebrafish ECG for the screening of calcium channel-blocking agents.

Effects of amiodarone on heart rate, QRS interval, QT interval, PR interval and QTc interval

Amiodarone is a class III antiarrhythmic drug that has been used to treat and prevent various types of arrhythmia, including ventricular tachycardia and atrial fibrillation .
Amiodarone can cause bradycardia and prolong the QT interval . Therefore, we explored whether these cardiovascular effects could be induced in zebrafish. We first immersed the zebrafish in a tank with 100 μM amiodarone in a one-liter water system to mimic acute treatment in humans. After one hour of immersion in the amiodarone bath, the adult zebrafish heart rate decreased to an average of 60 ± 10 bpm, which was significantly lower than that of the control group. Analysis of the ECG signals revealed effects of the amiodarone treatment on several ECG features: the QRS interval (79 ± 21 ms) and PR interval (103 ± 19 ms) increased relative to the pretreatment values. Notably, significant QT prolongation was also observed (481 ± 58 ms), and the mean heart rate-corrected QT interval (QTc) was 475 ± 52 ms, a striking 2-fold increase over the pretreatment value (Fig. ). Bradycardia and QT prolongation indicated the drug’s effects on the ion channels, leading to a decrease in cardiomyocyte excitability and to ventricular tachyarrhythmia, respectively. Therefore, these results suggest that zebrafish and humans may have highly conserved ion channels and similar reactions to amiodarone.

Prolongation of QTc after quinidine treatment

Quinidine is a voltage-gated sodium channel blocker that acts as a class Ia antiarrhythmic agent in the heart to prevent ventricular arrhythmias . Quinidine increases the cardiac action potential duration, which prolongs the QT interval and increases the risk of torsade de pointes in humans . We therefore tested these cardiac effects of quinidine in adult zebrafish. Before drug treatment, the baseline ECG showed an average heart rate of 166 ± 27 bpm. After retro-orbital injection of quinidine (250 μM), the heart rate significantly decreased to 92 ± 39 bpm (Fig. ). Specifically, the baseline QT interval was 200 ± 29 ms (Fig. ) and was significantly prolonged to 303 ± 40 ms after injection of quinidine (Fig. ). QTc interval prolongation was also seen after drug treatment (from 280 ± 51 to 355 ± 67 ms). The PR interval and QRS interval did not change significantly after drug treatment. Thus, zebrafish and humans have highly conserved ion channels and a similar ventricular tachyarrhythmia response in the heart.

Veratridine induces AV block in adult zebrafish

Veratridine is a plant alkaloid that acts as a neurotoxin and is known to depolarize excitable cells by preventing inactivation of voltage-dependent Na+ channels . This positive inotropic effect causes an increase in both Na+ and Ca2+ influx, which then increases nerve excitability and cardiac contractility , . Hence, we used veratridine to simulate a gain of function in sodium channels and determine what proarrhythmic effects could be induced in the adult zebrafish heart. A prolonged PR interval (from 58 ± 9 ms to 85 ± 9 ms) was observed in zebrafish injected with veratridine. Fig. shows first-degree AV block with significant PR interval prolongation at 3 minutes post-injection (8 out of 10 zebrafish). However, heart rate, QT interval, QRS interval and QTc interval did not change significantly after veratridine treatment (Fig. ). These results suggested that the increase in both Na+ and Ca2+ influx may prolong the PR interval and induce AV block, but not prolong the QT interval, in the adult zebrafish heart.
These data indicated that veratridine affects atrioventricular conduction more significantly; however, it may not affect the depolarization and repolarization of the ventricles in the adult zebrafish heart.
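The interval measurements reported above reduce to simple arithmetic on the fiducial points of the ECG trace. The sketch below illustrates these calculations; it assumes the wave boundaries (P onset, QRS onset/offset, T end) and R-peak times have already been annotated, and it applies the Fridericia correction that, as noted in the Discussion, was used for QTc in this study (the Bazett formula is included for comparison only). The fiducial values in the example are hypothetical.

```python
import numpy as np

def ecg_intervals(r_peaks_s, p_onset_s, qrs_onset_s, qrs_offset_s, t_end_s):
    """Compute heart rate and PR/QRS/QT/QTc intervals from annotated fiducial
    points (all inputs in seconds); returns values in bpm and milliseconds."""
    rr = np.diff(np.asarray(r_peaks_s))           # RR intervals (s)
    mean_rr = rr.mean()
    heart_rate = 60.0 / mean_rr                   # beats per minute
    pr = (qrs_onset_s - p_onset_s) * 1000.0       # PR interval (ms)
    qrs = (qrs_offset_s - qrs_onset_s) * 1000.0   # QRS duration (ms)
    qt = (t_end_s - qrs_onset_s) * 1000.0         # QT interval (ms)
    qtc_fridericia = qt / mean_rr ** (1.0 / 3.0)  # QTc (Fridericia), ms
    qtc_bazett = qt / np.sqrt(mean_rr)            # QTc (Bazett), ms, for comparison
    return dict(hr_bpm=heart_rate, rr_ms=mean_rr * 1000.0, pr_ms=pr,
                qrs_ms=qrs, qt_ms=qt, qtc_f_ms=qtc_fridericia, qtc_b_ms=qtc_bazett)

# Hypothetical fiducial points for one beat of a ~146 bpm trace (values in seconds)
example = ecg_intervals(r_peaks_s=[0.10, 0.51, 0.92, 1.33],
                        p_onset_s=0.44, qrs_onset_s=0.50,
                        qrs_offset_s=0.545, t_end_s=0.72)
print({k: round(v, 1) for k, v in example.items()})
```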
In this technical report, we presented a cost-effective, real-time ECG recording system that can be used to monitor ECG signals in adult zebrafish. The easy-to-operate adult zebrafish ECG system could also be extended to pharmacological studies, especially on drugs with proarrhythmic action. We demonstrated that several clinically relevant responses in humans, such as QT prolongation and heart rate variability, can be observed in adult zebrafish. We anticipate that the optimized, straightforward procedure and highly accessible ECG system will enhance productivity and learning in both research and teaching laboratories. To help other researchers reproduce our results, we have described in detail how to set up the ECG system, which does not need to be shielded in a Faraday cage during measurement. We also emphasized optimizing the placement sites of the needle electrode probe (Fig. ). There are some additional points to consider before initiating ECG recording: (1) When the pectoral electrode probe is inserted too deep into the zebrafish dermis, it can cause a reversed ECG waveform and excessive bleeding of the fish. (2) The P waveform can appear sharp and magnified when the pectoral needle probe is moved even slightly toward the right of the center body line. (3) When the pectoral probe is moved toward the left of the center body line, the P waveform can vanish. We reasoned that such changes of needle position and ECG waveform are anatomically linked, because when the zebrafish is positioned with its abdominal side up, the atrium is located slightly toward the right of the center body line . Therefore, extra attention should be paid to properly positioning the needle electrodes at the appropriate sites, which might require some practice when performing zebrafish ECG recording for the first time. We believe that by following the protocol provided in this study, any researcher can carry out successful ECG recording on adult zebrafish. As for sedating zebrafish during ECG recording, anesthetic agents are recommended to immobilize the fish and relieve its discomfort . To date, MS-222, a sodium channel blocker, is the only FDA-approved anesthetic for fish and has been reported to affect zebrafish heart rate in numerous studies , . However, we demonstrated that MS-222 not only significantly decreased the zebrafish heart rate but could also result in zebrafish mortality after 10 minutes of sedation. To support ‘3R’ efforts, i.e., the refinement, reduction and replacement of animal studies, we suggest the combined formula of MS-222 and isoflurane to provide safer sedation for adult zebrafish. Although heart rate generally decreased after prolonged sedation under the combined MS-222 and isoflurane formula, all zebrafish could be revived even after 30 minutes of sedation. Most importantly, we demonstrated that the heart rate of adult zebrafish remained in the normal range within the first five minutes of sedation. We also found that, when testing the effect of veratridine, the combined formula of MS-222 and isoflurane effectively prevented zebrafish death in comparison to MS-222 alone (data not shown). Together, these findings indicate the advantage of the combination formula of MS-222 and isoflurane, which should be especially useful when applying ECG methods for cardiovascular drug screening , , .
Regarding other zebrafish ECG methods, in vitro recording of adult zebrafish heart ECG has been reported, but such in vitro ECG may not capture the integrated responses of the whole living organism in real time . In a preliminary experiment, we tested our in vivo system for recording mouse ECG (data not shown). We found the system could indeed be employed to monitor heart rate variability in mice. However, the system may not yet be suited for continuous long-term monitoring of mouse ECG with the current configuration; therefore, we plan to conduct further studies to improve the system for mouse ECG monitoring and drug screening. We tested the zebrafish ECG system with regard to recording the QT interval in adult zebrafish under the influence of selected cardiovascular medications. The QT interval is measured between the onset of ventricular depolarization and the end of repolarization. Because of its heart-rate dependence, the QT interval may be altered by various pathophysiologic and pharmacologic influences. Thus, the QT interval is often corrected for heart rate and annotated as the QTc interval. QTc prolongation has been shown to be associated with various forms of tachycardia, and it may also arise from drugs that delay cardiac repolarization. It should be noted that the Fridericia formula was used in this study to calculate QTc. Although the Bazett formula is the most widely used correction method in clinical practice, the Fridericia formula is recommended by the U.S. Food and Drug Administration (FDA) for clinical trials on drug safety . QT prolongation can promote lethal arrhythmias such as torsade de pointes (TdP) and have severe adverse effects on patients at risk. We tested two drugs with different mechanisms of action that commonly cause prolonged QT in humans, and both drugs could induce QTc prolongation in adult zebrafish. We also tested the class I antiarrhythmic agent quinidine, which induces TdP in only 1–3% of quinidine-treated human patients. While classical TdP was not observed in quinidine-treated zebrafish, we did observe significant drug-induced high-degree AV blocks at 500 μM quinidine (Supplemental Fig. ). Thus, the ECG system presented in this study has the potential to expedite the use of adult zebrafish for cardiac toxicity screening. Although the combination of MS-222 and isoflurane had a weaker proarrhythmic effect than MS-222 alone and prolonged the survival time of adult zebrafish over that with MS-222 alone, we noted that this combination did not eliminate gill movement completely during the experiment, which might have interfered with the ECG recording. Other anesthetic agents might stop gill movement during ECG recording, but they can also induce hypoxia and bradycardia , . To mitigate such detrimental effects, a perfusion system has been proposed to keep the zebrafish viable during ECG recording. Moreover, it is possible to apply a low-pass filter in data processing to reduce the signal noise from gill movement and perfusion pulse, which are the most significant sources of noise in zebrafish ECG , , . In this study, we adapted the Biosppy toolbox in Python, which had been modified to process adult zebrafish ECG signals and to filter the noise from gill movement. However, we plan to continue modifying the system and data processing technique to improve ECG signal quality. Until such improvements are made, the current ECG system might serve as a convenient and economical zebrafish ECG system for the research community.
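For reference, the two heart rate-correction formulas discussed above are, with the RR interval expressed in seconds, $$\mathrm{QTc_{Bazett}}=\frac{\mathrm{QT}}{\sqrt{\mathrm{RR}}},\qquad \mathrm{QTc_{Fridericia}}=\frac{\mathrm{QT}}{\sqrt[3]{\mathrm{RR}}},$$ so at heart rates above 60 bpm (RR < 1 s), such as those of zebrafish, the Fridericia formula applies a milder upward correction than the Bazett formula.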
In summary, the three-needle electrode probe and in vivo ECG recording system described in this study could be an entry-level ECG assay platform for teaching and research in zebrafish laboratories. The system could also be used for small-scale cardiovascular drug research or forward genetic screening. We consider this ECG assay system to have promising potential to promote educational and translational applications of zebrafish model systems in future heart research.
Zebrafish
Adult zebrafish (wild type AB strain) aged from 10 to 12 months (body lengths approximately 3–3.5 cm) were used in this study. All zebrafish were reared in specialized tanks (AZOO, Taiwan) with a circulating water system at a density of approximately 30–50 fish per 10 liters of water under standardized conditions as reported previously . All animal protocols in this study were reviewed and approved by the Experimental Animal Care and Use Committee of National Tsing Hua University, Hsinchu, Taiwan (approval number: 10048). All experiments were performed in accordance with relevant guidelines and regulations.
Anesthetics preparation
We followed a previously described protocol to sedate adult zebrafish for ECG recording . Tricaine (Sigma, USA) was dissolved in distilled water to a final concentration of 2,000 parts-per-million (ppm) as a stock, and the pH value was adjusted to 7.2 with sodium hydroxide (Sigma-Aldrich). Isoflurane (Baxter, USA) was dissolved in absolute ethanol to create a stock solution of 100,000 ppm (isoflurane: ethanol = 1:9) and maintained in a brown glass bottle. All stocks were stored at 4 °C. For combined use of tricaine and isoflurane, each stock solution was added to the fish tank to the prescribed final concentration immediately before use.
Electrode probes
In preparing the electrode probes, we tested three types of electrode materials: tungsten filament, stainless steel and silver wire. During testing, we found that the tungsten filament could be made very thin (25 μm) and that it inflicted minimal injury to the zebrafish when inserted through the dermis. However, the tungsten filament was overly soft and thus easily deformed. The 100% silver needle probes were also easily deformed when inserted directly into the fish dermis. We also tested a silver needle probe composed of 70% silver (350 μm) and obtained results similar to those obtained with the stainless steel probes. Last, we tested stainless steel probes (330 μm) and found they were the most suitable type of electrode for our ECG system; these probes are also widely used in electrophysiology research , . The general characteristics of the stainless steel were its relatively high conductivity and high tensile strength, enabling the needle to penetrate the fish dermis easily and provide strong electrical signals. The design of the needle electrode set is illustrated in Supplemental Fig. , and its construction is described in the results.
ECG kit
The ECG signals were recorded using a commercial ECG kit, which uses a USB cable to connect the instrument to a computer that runs the packaged software “ECG Recording System” provided by the manufacturer (model No. EZ-BIO-01-S1-E, Ez-Instrument Technology Co., Taiwan; www.ezinstrument.com/en ). The ECG kit records at a data rate of 600 samples per second (SPS) with a digital low-pass filter at 80 Hz. The kit also contains an AC-line filter at 60 Hz to eliminate line noise. The kit’s signal-to-noise ratio evaluation is shown in Supplemental Fig. .
ECG signal processing
The ECG signal of adult zebrafish was processed using the Biosppy toolbox . The ECG R-peak segmentation algorithm implemented was based on the literature , . Specifically, a finite impulse response (FIR) band-pass filter, with cutoff frequencies set at 3 and 45 Hz, was used to filter the raw ECG signals. The filtered signals were then passed through additional low-pass and high-pass filters to remove low- and high-frequency noise, and we then calculated their first derivatives .
We used a moving-average window for smoothing and for removing 50 Hz power-line and muscle activity noise (Supplemental Fig. ). The QRS detection algorithm was based on the literature , and thresholds were set to detect and segment every P wave, QRS complex and T wave as templates. The templates were then filtered using a moving-average window (N ≈ 10). We averaged all templates of each ECG recording to obtain the final ECG template (Supplemental Fig. ), which was then used as the reference wave.
Drug treatment
The retro-orbital injection method was used to administer isoproterenol and verapamil into the zebrafish during ECG recording. The procedure followed a previously described protocol with some modifications . Briefly, the injection site was the retro-orbital venous sinus, which is located beneath the zebrafish eyeball. A 26S-gauge Hamilton syringe filled with the prescribed drug was positioned above the anesthetized fish’s eye at the 7 o’clock position at a 45-degree angle to the fish body. After insertion of the needle into the eye socket, the drug was gently injected without moving the fish during continuous ECG recording. For amiodarone testing, adult zebrafish were pre-exposed to the chemical by immersion in a water bath containing 100 μM amiodarone for one hour.
Statistical analysis
Data were processed with SigmaPlot v.10 and expressed as means ± SEM (standard error of the mean). Statistical significance was determined by the Student t-test. Significant differences were assessed at p values of <0.05 (*) and <0.001 (**). QTc intervals were normalized to heart rate using the standard Fridericia formula:
$$\mathrm{QTc}=\frac{\mathrm{QT}}{\sqrt[3]{\mathrm{RR}}}.$$
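To make the processing and correction steps above concrete, below is a minimal Python sketch of the described pipeline. The 600-SPS sampling rate and the 3–45 Hz FIR band come from the text; the file name, the placeholder QT value, and all variable names are illustrative assumptions, and BioSPPy applies a comparable band-pass filter internally, so the explicit FIR step is shown only to mirror the description.

import numpy as np
from scipy.signal import firwin, filtfilt
from biosppy.signals import ecg

fs = 600.0                                 # ECG kit data rate (samples per second)
raw = np.loadtxt("zebrafish_ecg.txt")      # hypothetical exported single-lead trace

# Explicit FIR band-pass (3-45 Hz), mirroring the described raw-signal filtering step
taps = firwin(numtaps=301, cutoff=[3, 45], pass_zero=False, fs=fs)
filtered = filtfilt(taps, [1.0], raw)      # shown for illustration; BioSPPy filters internally

# BioSPPy performs filtering, R-peak detection and heartbeat template extraction
out = ecg.ecg(signal=raw, sampling_rate=fs, show=False)
rpeaks = out["rpeaks"]                     # sample indices of detected R peaks

# RR intervals and a Fridericia-corrected QT; QT itself must be measured from the
# averaged template, so a fixed placeholder value is used here
rr = np.diff(rpeaks) / fs                  # RR intervals in seconds
qt = 0.200                                 # placeholder QT in seconds
qtc = qt / np.cbrt(rr.mean())              # Fridericia correction
print(f"mean HR = {60 / rr.mean():.0f} bpm, QTc = {qtc * 1000:.0f} ms")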
Supplementary Information
Advancing Advocacy: Implementation of a Child Health Advocacy Curriculum in a Pediatrics Residency Program | 0f32c5d8-2e74-4393-b48e-20f5bb3ad719 | 7062538 | Pediatrics[mh] | After completing this advocacy curriculum, trainees will be able to: 1. Identify key child health issues in their community. 2. Demonstrate improved knowledge of state and federal legislation processes and of current state-specific legislation regarding child health. 3. Communicate with state and federal representatives regarding a child health issue. 4. Express opinions via communication with child health organizations and through op-ed writing. 5. Design and present a state-specific action plan to advocate to a legislative body on a child health issue. 6. Demonstrate an improved comfort level with child health advocacy as a component of their future careers as pediatricians.
Currently, the ACGME specifies that pediatric residency should include five educational units of ambulatory experiences, including elements of community pediatrics and child advocacy. Health advocacy comprises advocacy at the individual, community, state, and national levels. There has been a growing focus on child health legislative activities in pediatric residency programs, including greater resident involvement in providing legislative testimony. Recently, pilot innovative curricula and independent curricula have been developed for community-level advocacy. – Tools have been developed to link advocacy training to resident competence and for community-based project planning. However, it can be a challenge for pediatric programs to find easily implemented and locally relevant curricula encompassing various levels of health advocacy. To better accomplish this goal of training future pediatricians to be capable of participating effectively in community-, state-, and national-level child health advocacy, we developed a relevant and easily implementable curriculum in our pediatric residency program. Available pediatric curricula have focused on generalized advocacy training models and assessment tools. , We add to the existing literature by describing in depth the development, implementation, and results of a locally relevant tested framework to help educate the next generation of pediatricians to advocate for their patients in a continuously evolving political advocacy landscape. We aimed to empower residents to be effective advocates by improving resident knowledge regarding state and federal children's health legislation and legislative processes and by teaching specific skills such as op-ed writing and the art of negotiation during child health advocacy meetings with federal and state representatives. Here, we describe the process of development of this curriculum, as well as results and outputs from the pilot within our residency program, and share our curricular framework and resources that could be utilized by other programs. Our hope is that by outlining the stepwise approach we took in development, implementation, and evaluation, as well as by sharing the end product as a resource, we can help residency educators looking to implement locally relevant advocacy curricula at their institutions.
We structured our curriculum development using Kern's six steps.
Step 1: Problem Identification and General Needs Assessment
We conducted an extensive literature search and identified a paucity of locally relevant curricula to effectively teach pediatric residents the art of community-, state-, and national-level child health advocacy. Out of 200 articles reviewed about advocacy in medicine, only seven introduced advocacy curricula for pediatric trainees. With support from an American Academy of Pediatrics (AAP) Community Pediatrics Training Initiative grant to develop a curriculum, we attended the AAP Legislative Conference in Washington, DC, in April 2016. This conference further highlighted gaps in pediatric trainee advocacy teaching and taught attendees basic advocacy skills via workshops, seminars, and meetings with federal representatives. Skills acquired were used for the layout of the sessions of our curriculum.
Step 2: Targeted Needs Assessment
We formed a workgroup for a targeted needs assessment at our hospital, Children's Hospital of the King's Daughters. The workgroup included local and national child advocacy leaders from the hospital, advocacy representatives from the Eastern Virginia Medical School, the director of graduate medical education, and representatives of our state AAP chapter. Furthermore, residents were given a precurriculum survey that revealed resident gaps in advocacy knowledge and skills. Thirty-seven percent of residents responded that they were not aware of pertinent local child health issues, 58% indicated that they were not familiar with advocacy resources, and 43% felt intimidated by the legislative process. The workgroup guided incorporation of high-yield advocacy skills into the curriculum based on the AAP Legislative Conference and resident survey results. For our interactive workshops, we chose to focus on three child health issues of local and national importance: pediatric nutrition, sudden infant death syndrome (SIDS), and child mental health. We also chose three important advocacy skills: communicating with state representatives, the art of negotiation, and op-ed writing.
Step 3: Curriculum Goals and Objectives
shows our objectives. Knowledge, attitude, and skills objectives were formulated after community and resident needs assessments and aligned with the pediatric milestones and ACGME competencies.
Step 4: Educational Strategies
The curriculum consisted of reading materials, didactic lectures, and interactive workshops. Our goal was twofold: advocacy education through reading and didactics and practical skill development through workshops, so as to fulfill both of these ACGME requirements. The learning methods chosen were selected primarily with a focus on ease of implementation in complex resident schedules. Since there was protected time for noon teaching sessions, didactic lectures and interactive workshop sessions were the learning forms most easily scheduled in these blocks and executed with available resources. Interactive workshops started with a 15-minute didactic describing a child health issue and existing state and national programs, as well as an introduction to an advocacy skill. A case scenario related to the child health issue was then introduced, and residents in each group practiced the focused advocacy skill. The groups were facilitated by local AAP members, faculty, and advocacy leaders.
Role-play in particular was utilized, as the act of political advocacy necessarily involves interpersonal interactions and this was an excellent way to practice such interactions. Each group had reference materials and checklists for guidance about effectively discussing the case scenario. At the end of the workshop, each group shared key points of the experience.
Step 5: Implementation
We administered the curriculum as a pilot from October 2016 to June 2017 with the intent to incorporate it permanently into the pediatric residency program curriculum after postimplementation evaluation. These lectures and workshops took place in the hour allotted to noon didactics for residents. Printouts of the case scenarios and PowerPoint presentations were distributed at each session. The average preparation time for each workshop was 1 month, involving two formal committee meetings prior to each workshop. Four facilitators were available during the role-play, and each facilitator supervised six to eight residents. The curriculum activities, also described in detail in the Lectures subsection, below, included the following:
1. Basics of advocacy and an introduction to the curriculum (facilitator: curriculum organizers):
• Highlight the basics of advocacy, relevance to pediatricians, educational resources, and curriculum layout.
2. Overview of the legislative process and policy (facilitator: local legislative representative or advocacy leader):
• Host a discussion on state child health including history, current laws, current challenges in child health policy, current views of state legislatures on child health policies, gaps, and opportunities.
3. Child health concerns in Virginia and misconceptions preventing trainees from participating in advocacy (facilitator: local advocacy leader):
• Host a discussion on the importance of advocacy for residents and share personal experiences. Highlight three main child health issues facing your state, existing work being conducted, and the level of legislature input.
4. AAP child health priorities and important currently pending legislation (facilitator: local legislative representative or AAP state lobbyist):
• Organize a didactic with an introduction of your local AAP chapter, legislative successes, and upcoming legislative priorities.
5. Child nutrition and how to meet your representative (facilitator: local advocacy leaders):
• Didactic:
○ Child nutrition and Special Supplemental Nutrition Program for Women, Infants, and Children in your state.
○ Existing state Department of Health programs for nutrition.
○ How to have interviews with your legislature.
○ How do you locate your legislature?
• Workshop:
○ Case-based practice on conducting mock interviews with your legislature (case on infant nutrition).
6. SIDS and the art of negotiation (facilitator: local advocacy leaders):
• Didactic:
○ SIDS in your state.
○ Department of Health programs to counter SIDS.
○ Art of negotiation: What are ways to effect change?
• Workshop:
○ Case-based practice on the art of negotiation with decision makers (case on SIDS).
7. Child mental health and op-ed writing (facilitator: local advocacy leaders):
• Didactic:
○ Child mental health statistics in your state.
○ Tips on writing op-eds.
○ Introduction to the local AAP chapter.
○ Introduction to the federal and state affairs office and section on medical students, residents, fellows, and trainees.
• Workshop:
○ Case-based practice on op-ed writing (audience-choice cases on mental health).
The project was approved by the Eastern Virginia Medical School Institutional Review Board.
Reading materials
The precurriculum reading material provided was optional. The AAP Advocacy Guide was introduced to residents. This guide is free and accessible online. Residents were advised to read chapters 1, 3, and 4 prior to Lecture 1, Lecture 2, and Workshop 1, respectively.
Lectures
Lecture 1 covered the basics of health advocacy and an introduction to curriculum goals and real-life cases to stimulate interest . This lecture could be given by any faculty or resident. In our curriculum, it was delivered by one of the residents organizing the curriculum. Lecture 2 was an overview of the legislative and bill implementation process, reviewing current priorities for child health policy and past progress in child health policy . This lecture should be given by an individual familiar with the legislative process, such as the local or state representative of the area. We found these individuals to be very interested in community outreach on subjects relating to child health in their constituencies. Collaborating with the local AAP chapter is a great way to identify and invite such individuals. In our curriculum, this lecture was delivered by the lieutenant governor of Virginia because he was a physician on faculty at our institution. Lecture 3 described the fundamentals of advocacy and major child health issues in the area, and discussed challenges and misconceptions preventing trainees from participating in advocacy . This lecture should be given by an individual familiar with advocating to representatives and legislative bodies on behalf of child health issues. Considering that such advocacy is one of the important functions of the AAP, the local AAP chapter would be a good resource through which to identify a speaker. Alternatively, faculty members who are familiar with advocating can also deliver this didactic. In our curriculum, this lecture was delivered by the former Virginia health commissioner. Lecture 4 focused on local AAP child health priorities, an in-depth overview of the state legislative process, and current pending legislation for which the AAP was actively advocating . A lecturer who is familiar with current legislative priorities should be identified through the local AAP chapter. In our curriculum, this lecture was delivered by an AAP state lobbyist.
Interactive workshops
Workshops can be facilitated by in-house or invited faculty members who have some baseline knowledge of health advocacy. In our curriculum, in-house faculty members who had been involved in child health advocacy facilitated the workshops. Workshop 1 taught the issue of pediatric nutrition along with the advocacy skill of communicating with state representatives. The interactive scenario was called “Meet Your State Senator to Advocate for the Child Nutrition Authorization Bill 2016.” An example of a PowerPoint that highlighted the health issue of interest and proceeded to teach the skill in theory is shared in . The skill was then practically taught in group format. The group facilitator acted as the senator, and residents advocated for a nutrition bill that was being considered in the House and Senate using a skill checklist that was developed by curriculum organizers . is a framework learners use in engaging with the workshop.
The answers to “What are the member/staff priorities for the coming year?” and “What committees does the representative/senator sit on?” should be known by the session facilitator in order for the facilitator to accurately embody the state senator's position. In our experience, this information was found online on the state legislature's government (.gov) website or on GovTrack.us ( www.govtrack.us ), where active bills and legislators' positions on them were described. Workshop 2 taught the art of negotiation in the context of the child health issue of SIDS . This involved a scenario where residents practiced how to negotiate with a facilitator who acted as a Medicaid representative. Workshop 3 taught residents how to write an op-ed article on a child mental health issue. A PowerPoint teaching the theoretical points of op-ed writing is shared in . The small-group discussion culminated with each small group generating an outline of an op-ed and sharing it with the whole group. At the conclusion of the curriculum, residents, with the help of curriculum facilitators, created a state-specific advocacy action plan (see the ). Residents devised a stepwise approach to facilitate effective communication with decision makers regarding health policies that were under current consideration in the legislature. The stepwise plan involved identifying bills and delegates using reliable and easily accessible websites; identifying channel, mode, and timing of contact; and establishing a follow-up plan. Residents then utilized the state-specific action plan by advocating at Virginia General Assembly Day 2017. Individuals in other programs can use the figure as a practical guide should they wish to communicate with their representatives.
Step 6: Evaluation and Feedback
Resident participation in the advocacy curriculum was voluntary. All who elected to participate received a precurriculum survey . Survey questions were developed using the AAP Advocacy Guide. The survey questions had not been previously validated. We distributed the reading materials described earlier to participants. Residents who had reviewed the reading materials completed postcurriculum surveys that were identical to the precurriculum surveys. We numerically coded surveys to allow anonymous pre/post comparison between individuals. Participants were asked for written feedback at each session. Additionally, the Community Pediatrics Training Initiative leadership was involved in ongoing feedback via conference calls. A longitudinal collaboration was established and maintained throughout the curriculum implementation period in which the other three AAP grant recipients shared documents, educational modules, and other information about their advocacy projects in an online forum and on conference call discussions.
The curriculum generated several noteworthy outcomes, including (1) residents creating an advocacy action plan for community- and state-level advocacy, (2) residents collaborating with the local AAP to advocate at the local general assembly day, (3) strengthening of relationships with state- and community-level resources, (4) the curriculum being implemented as an annual offering for pediatrics residents, and (5) demonstrated improvement in resident knowledge and comfort level with health advocacy. Each of these is described in the following paragraphs. After creation of the state-level legislative advocacy action plan, residents utilized skills acquired in real time and collaborated with the Virginia AAP to advocate at Virginia General Assembly Day 2017. Fifteen residents met their state representatives to advocate for legislation pertinent to opioid prescribing in the context of neonatal abstinence syndrome. We had representative speakers from community organizations and the Virginia AAP who delivered a didactic session. This enabled the residents to network and learn more about resources available in the community not only for patients but also for potential future collaborations. Sixty-four out of 70 residents (91%) completed the precurriculum survey and were invited to participate in the curriculum. Sixty-one out of 64 residents (95%) completed the postcurriculum survey. Thirty-three percent of curriculum participants were PGY 1, 31% were PGY 2, 30% were PGY 3, and 6% were PGY 4. In the precurriculum survey, most residents received news about pediatric health issues from AAP emails (87%) and the AAP website (66%). Few residents had been involved in community-, state-, or federal-level advocacy (18%, 2%, and 1%, respectively). Many residents were somewhat (31%) or moderately interested (33%) in learning how to effect change. However, almost half of pediatric residents were not comfortable communicating with (47%) or finding (41%) their representative. Five questions on the postcurriculum survey had also been asked on the precurriculum survey. All five questions showed significant improvement in mean scores postcurriculum : likelihood of speaking out for a child health issue, familiarity with the process of a bill becoming law, familiarity with online resources for advocacy training, familiarity with finding a legislative representative online, and comfort level in communicating with state and local representatives.
Our experience shows successful implementation of a locally relevant advocacy curriculum now in its second year. The curriculum updated residents on community, state, and national child health issues, and it taught them the skills needed to advocate effectively. It was well received by residents and resulted in improvements in resident knowledge and attitudes regarding child health advocacy. The goals, objectives, and educational strategies of this curriculum are reproducible and feasible. They can be utilized by other pediatric residency programs and tailored to state-specific health issues. Using a state-specific child health needs assessment can be informative in choosing health issues around which workshops can be modeled. This can enhance residents' awareness of local health issues and empower them to direct their future advocacy efforts toward pressing child health concerns. Most programs have the capability to form the types of collaborations utilized in our curriculum, such as with their local AAP chapter. The local chapter can facilitate communication and extend invitations to local representatives who can deliver didactics in pediatric facilities. These collaborations can also be helpful in providing appropriate tools that can be utilized to teach various advocacy skills. Most programs have mandatory educational conferences into which didactics and workshops could be incorporated. The didactics were highly attended in our curriculum, perhaps in part due to our high-profile speakers. This may not be reproducible if institutions do not have proximity to such speakers; however, one of our best-attended invited speaker lectures was given by an AAP lobbyist who was not known to the residents. Additionally, a stepwise advocacy action plan can easily be created for different states. This can further be modified by choosing a specific bill of interest and generating steps to best address an individual's or group's position on certain legislation. In addition, most pediatric residency programs have the ability to participate in a local advocacy day or general assembly day where residents can practice the skills acquired from such a curriculum. As with exposure to quality improvement programs, advocacy exposure should be longitudinal throughout postgraduate medical education, and an effort should be made to maximize time allocation where possible. Studies have shown that exposure to advocacy teaching and training during residency leads to a higher rate of resident participation in community involvement and advocacy during and after residency. , We found that blocks of protected time were useful for effectively teaching residents practical advocacy skills. To increase participation in curricular activities, we suggest that interested residents be provided with protected time in which they can focus on their advocacy interests. Multiple challenges were encountered during this project. Specific roles were not assigned to members in the steering committee. In retrospect, assigning roles would have facilitated smoother preparation, work division, and delivery of the curriculum. Core facilitator availability for each workshop was difficult to arrange due to conflicting schedules. Having a variety of faculty facilitators for each workshop may be feasible for future workshops. Resident attendance due to clinical duties was another barrier. We found that when on inpatient wards, residents were unable to attend the workshops or would have to leave midway. 
Resident participation in the survey was challenging because the survey was in paper form instead of electronic. We did not formally assess real-world application of skills taught or long-term change in practices with regard to advocacy. A limitation of our study is the lack of long-term data on residents going on to advocate for bills or participate in legislative advocacy days in the future. A future study is planned to survey residents who took part in the curriculum to assess long-term influence of the curriculum on their involvement in political advocacy in their future careers.

We have shown that a reproducible, feasible, and locally relevant longitudinal advocacy curriculum can be successfully developed and implemented to improve child health advocacy knowledge and attitudes in pediatric residents. Since the implementation of this curriculum, a core advocacy committee, including residents and faculty, has been formed in our program that will continue educational activities annually. The committee further aims to assess the influence of this curriculum on resident behavior and real-world application of advocacy skills. The AAP continues to be a key partner in yearly curriculum evolution. The ACGME alludes to the importance of advocacy training during pediatric residency, and we have suggested the development of specific recommendations with regard to the content and layout of such curricula. Other pediatric residency programs can utilize our curricular framework and resources to implement a longitudinal advocacy curriculum.
A. Lecture 1.pptx
B. Lecture 2.pptx
C. Lecture 3.ppt
D. Lecture 4.pptx
E. Workshop 1.pptx
F. Workshop 1 Skill Checklist.pdf
G. Workshop 2.pptx
H. Workshop 3.pptx
I. Curriculum Survey.docx

All appendices are peer reviewed as integral parts of the Original Publication.
The challenges of prevention and health promotion, and those of the PAPPS | 160d1136-bbcb-4d61-be2b-7634e02d2a5b | 6836932 | Preventive Medicine[mh] | The author declares no conflict of interest.
High Smad7 marks inflammation in patients with chronic pouchitis | 91bdb673-89d4-4d69-a9b7-bcfa6a6e52fd | 11911167 | Digestive System[mh] | In the last decades, the advent of biologics and small molecules has contributed to reducing the rates of colectomy in patients with ulcerative colitis (UC) ( ), although this is still needed in nearly 10% of patients at 10 years ( ). Pouchitis is the most frequent complication after colectomy with ileal pouch-anal anastomosis (IPAA) ( , ). Overall, 25% to 50% of UC patients who undergo IPAA surgery experience at least one episode of pouchitis within 10 years, which has a favorable response to antibiotics ( ). Nonetheless, nearly one-fifth of these patients develop a chronic phenotype, either antibiotic-dependent or antibiotic-resistant, that requires further therapy, including biologics or small molecules. Unfortunately, more than 50% of these patients have an inadequate response to this treatment, which can lead to pouch failure and require pouch excision ( – ). Although the pathogenesis of chronic pouchitis (CP) remains poorly characterized, it has been hypothesized that CP arises in individuals with genetic susceptibility as a result of an inadequate response of the mucosal immune system to the local microbiota ( – ). Identification of the factors/mechanisms that amplify the pouchitis-associated inflammatory response could help develop novel treatments. Accumulating evidence indicates that the inflamed gut of patients with UC and patients with Crohn’s disease (CD) contains elevated levels of Smad7, a protein that blocks TGF-β1 signaling, thus contributing to amplifying pathogenic immune responses ( , ). In fact, the knockdown of Smad7 with a specific antisense oligonucleotide (AS) restores TGF-β1 signaling, with the downstream effect of inhibiting inflammatory pathways both in vitro and in mouse models of colitis ( – ). These findings were consistent with the results of the Phase 1 and Phase 2 clinical trials showing that the administration of a pharmaceutical compound containing the Smad7 AS (termed Mongersen) to patients with active CD induced clinical and endoscopic improvement ( – ). However, a subsequent large multicenter, randomized, double-blind, placebo-controlled, phase 3 trial was prematurely discontinued due to an interim analysis showing no effect of Mongersen on CD activity ( ). Further investigation of the pharmaceutical properties of the Mongersen batches used in the phase 3 study revealed that most of them were unable to knock down Smad7 in cultured cells, highlighting the need to maintain consistent manufacturing requirements for clinical AS, as well as the potential benefits of in vitro bioassays as part of quality control ( ). This study aimed to investigate the expression of Smad7 in CP.
Patients and samples

The modified pouchitis disease activity index (mPDAI) was used for the diagnosis of pouchitis (i.e. mPDAI ≥ 5) ( ). Mucosal samples were taken from the inflamed pouch of active CP patients. Controls included mucosal biopsy samples taken from the uninflamed pouch of 6 patients without clinical/endoscopic evidence of pouchitis, from the normal or inflamed pre-pouch terminal ileum (pre-pouch ileitis) of patients with active CP, and from the terminal ileum of normal controls who underwent colonoscopy for colorectal carcinoma screening programs. Each patient who took part in the study gave their informed written consent, and the study protocol was approved by the local Ethics Committee (Tor Vergata University Hospital, Rome, R.S. 58.23).

Western blotting

Total proteins were extracted from mucosal biopsy samples. Samples were lysed on ice in a buffer containing 10 mM HEPES (pH 7.9), 10 mM potassium chloride (KCl), 0.1 mM ethylenediaminetetraacetic acid (EDTA), 0.2 mM ethylene glycol-bis (β-aminoethyl ether)-N,N,N’,N’-tetraacetic acid (EGTA), and 0.5% Nonidet P40 supplemented with 1 mM dithiothreitol (DTT), 10 mg/ml aprotinin, 10 mg/ml leupeptin, 1 mM phenylmethylsulphonyl fluoride (PMSF), 1 mM Na3VO4, and 1 mM sodium fluoride (NaF). The lysates were separated on sodium dodecyl sulfate (SDS)-polyacrylamide gels and transferred to nitrocellulose membranes using a Trans-Blot Turbo apparatus (Bio-Rad Laboratories, Hercules, CA). The membranes were incubated with anti-human Smad7 (1:1000, #MAB2029, R&D Systems, Minneapolis, MN) or anti-human vinculin (1:10000, #ab129002, Abcam, Cambridge, UK) antibodies, followed by a secondary antibody conjugated to horseradish peroxidase (1:20000, #P0448, Dako, Santa Clara, CA). Membrane imaging was performed using chemiluminescence with the ChemiDoc Imaging System (Bio-Rad Laboratories).

Immunofluorescence

Immunofluorescence was performed on frozen sections of ileal and pouch samples taken from CP patients and on ileal samples from CTR. The samples were embedded in a cryostat mounting medium (Neg-50, #6502, Epredia, Kalamazoo, Michigan), snap frozen, and stored at -80°C. Sections of mucosal biopsy samples were fixed with 4% paraformaldehyde for 10 min and permeabilized with 0.1% Triton X-100 for 20 min at room temperature. The sections were then blocked for 1 hour at room temperature (PBS, 1% BSA, 10% goat serum) and incubated overnight at 4°C with a mouse primary antibody against human Smad7 (1:150, R&D Systems), a rabbit primary antibody against human Smad7 (1:100, #37036, SAB, Greenbelt, Maryland), and a mouse primary antibody against human EpCAM (1:1000, #2929, Cell Signaling Technology, EuroClone, Milan, Italy). After washing with PBS 1X, the secondary goat anti-mouse Alexa488 antibody (1:1000, #A11017, Thermo Fisher Scientific, Waltham, MA), goat anti-rabbit Alexa488 antibody (1:2000, #A11008, Thermo Fisher Scientific), and goat anti-mouse Alexa568 antibody (1:2000, #A11004, Thermo Fisher Scientific) were applied for 1 hour at room temperature. After washing with PBS 1X, the sections were mounted using ProLong Gold antifade reagent with DAPI (#P36931, Thermo Fisher Scientific) and analyzed using the LEICA DMI4000 B microscope with the LEICA Application Suite software (V4.6.2) (Leica, Wetzlar, Germany).

Cell isolation and culture

Lamina propria mononuclear cells (LPMCs) were isolated from mucosal biopsy samples using dithiothreitol (DTT)–ethylenediaminetetraacetic acid (EDTA) and through enzymatic digestion.
Briefly, pieces of intestinal mucosa were washed in Hank’s balanced salt solution containing 1 mM DTT and antibiotics for 15 min at room temperature to remove mucus. The samples were then minced and incubated in Hank’s balanced salt solution containing 1 mM EDTA and antibiotics for 20 min at 37°C to remove epithelial cells. After two washes in Hank’s balanced salt solution, the samples were incubated with Liberase TM (200 μg/mL, #05401127001) and DNase I (200 μg/mL, #11284932001) (both from Roche Diagnostics GmbH, Mannheim, Germany) for 30 min at 37°C. After enzymatic digestion, mononuclear cells were collected. To determine which cells express Smad7 among CP LPMCs, the collected cells were analyzed by flow cytometry.

Flow cytometry

CP LPMCs were stained with the LIVE/DEAD cell viability assay (1:1000 for 10⁶ cells, #L34966A, Thermo Fisher Scientific). Cells were then washed and stained for 30 min at room temperature with anti-human CD3-PerCP-Cyanine 5.5 (#45-0037-42, Thermo Fisher Scientific), CD45-APC-H7 (#641417, BD Biosciences, San Diego, CA), CD8-PE-Cy7 (#557746, BD Biosciences), CD56-AlexaFluor 647 (#318314, Biolegend, San Diego, CA), CD19-FITC (#345776, BD Biosciences), CD14-PE-Cy7 (#557742, BD Biosciences), CD68-APC (#333810, Biolegend), and CD11c-PerCP-Cy5.5 (#337210, Biolegend) (all used at 1:50 final dilution). After washing, cells were fixed and permeabilized with IC Fixation Buffer (eBioscience) and Permeabilization Buffer (Thermo Fisher Scientific), respectively. Anti-human Smad7-PE (#orb485741, Biorbyt, Durham, North Carolina) was finally used to stain intracellular Smad7 for 30 min at room temperature. Appropriate isotype-matched controls were included. A Gallios flow cytometer (Beckman Coulter, Brea, CA) was used for acquisition, and Kaluza software (Beckman Coulter) was used for analysis.

Ex vivo organ cultures

Mucosal samples taken from CP patients were placed on steel grids in an organ culture chamber at 37°C in a 5% CO2/95% O2 atmosphere and cultured in RPMI 1640 medium. To determine whether Smad7 controls the CP inflammatory response, the mucosal samples were either left untreated or transfected with a specific Smad7 AS or sense (control) oligonucleotide (both used at 10 µg/ml, GeneLink, Orlando, Florida) for 24 h using Opti-MEM medium and Lipofectamine 3000 reagent according to the manufacturer’s instructions (both from Life Technologies, Milan, Italy). The efficiency of transfection was determined by real-time PCR.

Real-time PCR

Total RNA was extracted from CP LPMCs transfected with Smad7 sense or AS using the PureLink mRNA mini-kit (#12183025, Thermo Fisher Scientific). A constant amount of RNA (1 μg/sample) was retrotranscribed into complementary DNA (cDNA) using Oligo(dT) primers and M-MLV reverse transcriptase (#28025021, Thermo Fisher Scientific). The cDNA was amplified using the following conditions: denaturation for 1 min at 95°C; annealing for 30 seconds at 59°C for human Smad7 and at 60°C for human β-actin; and extension for 30 seconds at 72°C. RNA expression was calculated relative to the β-actin gene using the ΔΔCt algorithm. The primer sequences were as follows: Smad7 Fwd 5′-GCCCGACTTCTTCATGGTGT-3′, Rev 5′-TGCCGCTCCTTCAGTTTCTT-3′; β-actin Fwd 5′-AAGATGACCCAGATCATGTTTGAGACC-3′, Rev 5′-AGCCAGTCCAGACGCAGGAT-3′; TNF-α Fwd 5′-AGGCGGTGCTTGTTCCTCAG-3′, Rev 5′-GGCTACAGGCTTGTCACTCC-3′; IL-8 Fwd 5′-AGGAACCATCTCACTGTGTG-3′, Rev 5′-CCACTCTCAATCACTCTCAG-3′.
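The relative-expression step referred to above is the standard 2^(−ΔΔCt) (Livak) calculation. A minimal sketch of how such a calculation could be performed is shown below; the Ct values are hypothetical placeholders, not data from this study.

```python
# Minimal sketch of the 2^(-ΔΔCt) relative-expression calculation (Livak method).
# The Ct values below are hypothetical placeholders, not data from this study.

def fold_change_ddct(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Fold change of a target gene (e.g., Smad7) relative to the reference gene
    (beta-actin), normalized to a control condition (e.g., sense oligonucleotide)."""
    delta_ct_sample = ct_target_sample - ct_ref_sample      # ΔCt of the treated sample
    delta_ct_control = ct_target_control - ct_ref_control   # ΔCt of the control sample
    delta_delta_ct = delta_ct_sample - delta_ct_control     # ΔΔCt
    return 2 ** (-delta_delta_ct)

# Example: Smad7 in an AS-transfected sample versus a sense-transfected control.
print(fold_change_ddct(ct_target_sample=26.5, ct_ref_sample=17.0,
                       ct_target_control=24.8, ct_ref_control=17.1))  # ~0.29, i.e., knockdown
```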
Enzyme-linked immunosorbent assay

Cell-free supernatants of ex vivo organ cultures of mucosal biopsy samples taken from CP patients and transfected with sense or Smad7 AS were used to quantify extracellular TNF-α and IL-8 by enzyme-linked immunosorbent assay (ELISA) kits (#DTA00C and #D8000C, respectively; both from R&D Systems). Absorbance readings were taken at 450 nm using a DTX 880 multimode detector (Beckman Coulter).

Statistical analysis

Differences between groups were compared using the Student’s t-test or one-way ANOVA. The Pearson correlation coefficient was used to measure the linear correlation between the levels of Smad7 and the mPDAI. All analyses were performed using GraphPad 9 software.
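As an illustration of how the comparisons described above could be reproduced outside GraphPad, the sketch below runs the same three tests (unpaired Student’s t-test, one-way ANOVA, and Pearson correlation of Smad7 levels against the mPDAI) with SciPy. All numerical values are hypothetical placeholders, not data from this study.

```python
# Hedged sketch of the statistical comparisons described above, using SciPy.
# All values are hypothetical placeholders; the study itself used GraphPad 9.
from scipy import stats

# Hypothetical densitometric Smad7/vinculin ratios (arbitrary units) per group.
inflamed_pouch = [1.8, 2.1, 1.6, 2.4, 1.9]
uninflamed_pouch = [0.9, 1.1, 0.8]
control_ileum = [0.7, 0.9, 0.6, 0.8]

# Two-group comparison: unpaired Student's t-test.
t_stat, p_two_groups = stats.ttest_ind(inflamed_pouch, control_ileum)

# Multi-group comparison: one-way ANOVA across the three groups.
f_stat, p_anova = stats.f_oneway(inflamed_pouch, uninflamed_pouch, control_ileum)

# Linear correlation between Smad7 content and the mPDAI.
smad7_levels = [1.8, 2.1, 1.6, 2.4, 1.9, 0.9, 1.1, 0.8]
mpdai_scores = [7, 8, 6, 9, 7, 3, 4, 2]
r, p_corr = stats.pearsonr(smad7_levels, mpdai_scores)

print(f"t-test p={p_two_groups:.3f}, ANOVA p={p_anova:.3f}, Pearson r={r:.2f} (p={p_corr:.3f})")
```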
Study population

Nineteen CP patients (6 female, 31.58%) who underwent endoscopic evaluation of the pouch were included in this study. The demographic and clinical characteristics of the patients are shown in . The median age of the patients was 47 years (range, 21-69 years). Two patients were smokers at the time of endoscopy, 5 were former smokers, and 12 were not smokers. Thirteen patients (68.42%) had active clinical and endoscopic disease (mPDAI ≥ 5). At the time of mucosal sample collection, 8 patients were receiving no drug, 6 patients were taking biologics, and 5 patients were receiving antibiotic therapy.

High expression of Smad7 in the inflamed pouch of CP patients

To determine whether the CP-associated inflammatory response is characterized by elevated expression of Smad7, we initially compared the protein levels of Smad7 in biopsy samples taken from 9 CP patients with those expressed in the uninflamed pouch of 3 patients with a history of CP, in the normal pre-pouch ileum (n=5) of patients with active CP, and in normal CTR ileal samples. Enhanced expression of Smad7 was observed in the inflamed pouch of patients with CP compared to the uninflamed pouch, normal pre-pouch ileum, and CTR ( , ). In 3 CP patients, endoscopy documented active lesions in the pre-pouch mucosa (pre-pouch ileitis). Smad7 protein expression in such samples was significantly higher than that seen in the normal ileum of the CTR and did not differ from that documented in the inflamed pouch of the same patients ( , ). Next, we correlated Smad7 protein levels with the mPDAI in 12 CP patients (9 with active disease and 3 with inactive disease). The data shown in indicate a positive correlation between the content of Smad7 and the mPDAI.

In the inflamed pouch, Smad7 is expressed in both the epithelial and lamina propria compartments

Next, we determined which cells express Smad7 in the inflamed pouch of CP patients. For this purpose, we collected biopsies from clinically active CP patients with endoscopic evidence of lesions in the pouch and from CTR, and examined the expression of Smad7 by immunofluorescence. Smad7-positive cells were evident in both the epithelial and lamina propria compartments of biopsy samples taken from all groups ( ). The expression of Smad7 in the epithelial compartment was confirmed through immunofluorescence co-staining with the epithelial cell marker EpCAM ( ). Quantification of the positive cells showed, however, that Smad7-expressing cells were more abundant in the lamina propria compartment of inflamed samples of CP patients than in CTR, whereas no significant differences in terms of Smad7-positive epithelial cells were seen among the groups ( ). To ascertain which cell types in the lamina propria compartment express Smad7, LPMCs isolated from inflamed CP samples were analyzed for Smad7 by flow cytometry. Initially, we assessed the fractions of CD3+/CD8-, CD3+CD8+, CD19+, CD56+, CD11c+, CD68+, and CD14+ cells in the inflamed pouch by gating on CD45-expressing cells. T lymphocytes, and mainly CD3+CD8- cells, were the dominant population in all the samples analyzed ( ). Next, we gated Smad7-positive cells and assessed the percentages of CD45+ and CD45- cells. The majority of Smad7-expressing cells were CD45+, even though nearly one-third of Smad7-expressing cells were CD45-negative ( ). When the analysis was restricted to CD45+ cells, it was evident that one-fifth of them were positive for Smad7 ( ).
Among the CD45+Smad7+ cells, CD3+CD8- T lymphocytes were the predominant population, even though the protein was variably expressed by the other cell types analyzed ( ). Further analysis of CD45+ LPMCs showed that more than 10% of CD3+CD8- T cells were positive for Smad7 ( ). Furthermore, Smad7 positivity was seen in more than 20% of CD3+CD8+ T cells, as well as in more than 30% of all the other cell populations analyzed ( ). Evaluation of the Smad7 mean fluorescence intensity (MFI) showed no significant differences among the various CD45+ LPMC subtypes ( ). In parallel, we assessed Smad7 expression in LPMCs isolated from biopsy samples taken from the terminal ileum of 2 healthy controls. However, Smad7-positive CD45-expressing LPMCs were barely detectable in the normal ileum ( ). Collectively, these findings indicate that CP-associated inflammation is associated with elevated expression of Smad7 in immune and non-immune cells.

Knockdown of Smad7 in CP ex vivo mucosal explants is associated with changes in the expression of inflammatory molecules

To functionally link the high Smad7 with the ongoing mucosal inflammation in CP, we inhibited Smad7 in ex vivo mucosal explants of patients with CP with a well-characterized AS and evaluated the changes in the expression of TNF-α and IL-8, the latter being involved in the recruitment of neutrophils to the inflamed pouch ( , ). Smad7 knockdown resulted in a significant down-regulation of TNF-α and IL-8 at both mRNA and protein levels ( ).
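The per-subset percentages reported in this section were obtained by manual gating in Kaluza. As a purely illustrative sketch of the same kind of quantification, the code below computes Smad7 positivity within CD45+ subsets from a small per-event table of boolean marker calls; the table, its column names, and the 0/1 calls are assumptions introduced for illustration and are not part of the study.

```python
# Illustrative per-subset Smad7 quantification from a hypothetical per-event table.
# The table, column names, and 0/1 marker calls are assumptions; the study's
# gating and percentages were generated with Kaluza software.
import pandas as pd

# One row per live cell; columns hold hypothetical 0/1 positivity calls per marker.
events = pd.DataFrame({
    "CD45":  [1, 1, 1, 1, 1, 0, 1, 1],
    "CD3":   [1, 1, 0, 0, 1, 0, 1, 0],
    "CD8":   [0, 1, 0, 0, 0, 0, 1, 0],
    "CD19":  [0, 0, 1, 0, 0, 0, 0, 0],
    "CD56":  [0, 0, 0, 1, 0, 0, 0, 0],
    "CD14":  [0, 0, 0, 0, 0, 0, 0, 1],
    "Smad7": [1, 0, 1, 0, 0, 1, 1, 1],
}).astype(bool)

cd45_pos = events[events["CD45"]]  # restrict to CD45+ cells, as in the gating above

subsets = {
    "CD3+CD8- T cells": cd45_pos["CD3"] & ~cd45_pos["CD8"],
    "CD3+CD8+ T cells": cd45_pos["CD3"] & cd45_pos["CD8"],
    "CD19+ B cells": cd45_pos["CD19"],
    "CD56+ NK cells": cd45_pos["CD56"],
    "CD14+ monocytes": cd45_pos["CD14"],
}

for name, mask in subsets.items():
    subset = cd45_pos[mask]
    if subset.empty:
        continue  # skip subsets with no events
    pct_smad7 = 100 * subset["Smad7"].mean()  # fraction of Smad7+ events in this subset
    print(f"{name}: {pct_smad7:.1f}% Smad7+ ({len(subset)} events)")
```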
This study was undertaken to determine whether Smad7 is over-expressed in CP. We report up-regulation of Smad7 in the inflamed pouch of CP patients compared to the normal ileum of healthy individuals and patients with CP, and the uninflamed pouch of patients with a history of CP. The findings are in line with data from our initial study aimed at exploring the function of Smad7 in IBD, which included the analysis of Smad7 in two samples taken from CP patients: in both samples, Smad7 expression was greater than that found in the normal, unaffected colon ( ). A small subgroup of our CP population had pre-pouch CD-like ileitis ( ), as the endoscopic lesions extended up to the pre-pouch ileum. In such samples, Smad7 expression was similar to that documented in the inflamed pouch of the same patients. These findings, together with the demonstration that in patients with CP there was a good correlation between the Smad7 content in the pouch and the mPDAI, reinforce the notion that the ongoing mucosal inflammation, rather than other variables (e.g. current therapy), is the driving force for Smad7 induction in the gut.

We also collected biopsy samples from the inflamed and uninflamed pouch and normal terminal ileum to localize Smad7-positive cells by immunofluorescence. In all the subgroups analyzed, Smad7-expressing cells were evident in both the epithelial and lamina propria compartments, but in each of these compartments, the number of Smad7-positive cells was higher in the inflamed pouch than in the controls. Next, we characterized the lamina propria cell sources of Smad7. Since the number of cells obtained from tiny endoscopic biopsies was not sufficient to make further comparisons among the various groups, we focused our analysis on cells isolated from the inflamed pouch. Flow cytometry analysis of CP LPMCs showed that Smad7 was mainly expressed by CD45+ cells, and among these, T lymphocytes were the main source, even though Smad7 was evident in virtually all types of immune cells analyzed, including myeloid cells, B cells, and NK cells. In contrast, Smad7 MFI in T lymphocytes did not differ from that measured in other cell types, and the analysis of Smad7 in single cell types revealed that only one-third of T cells were positive. Altogether, these data suggest that the upregulation of Smad7 in active CP reflects, at least in part, the increased infiltration of T lymphocytes in the inflamed mucosa of the pouch rather than a specific induction of the protein in such cell types. Smad7 was also detectable in nearly one-third of CD45-negative cells. The LPMC isolation procedure allows the recovery of non-immune cells, including various subsets of stromal cells and endothelial cells, which are known to express Smad7 in other systems ( , ). Thus, it is conceivable that these cell types contribute to the positivity of Smad7 in the CD45-negative lamina propria compartment of patients with active CP. Studies are now ongoing to address this issue.

Knockdown of Smad7 in CP mucosal explants with a specific AS led to a significant down-regulation of TNF-α and IL-8. These data could be pathogenically relevant, as both cytokines are over-produced in CP and are supposed to amplify the mucosal inflammation ( ). The molecular mechanism by which Smad7 controls the expression of such inflammatory molecules remains unclear, even though it is conceivable that the documented changes in TNF-α and IL-8 production can reflect the modulatory effects of Smad7 on various signaling pathways controlling inflammatory gene expression. In this context, for instance, we previously documented a positive effect of Smad7 on the activation of NF-kB, a transcription factor that up-regulates the production of inflammatory cytokines and contributes to the propagation of mucosal inflammation ( , ). Additionally, the Smad7-mediated abrogation of TGF-β1 activity could result in changes in the function of MAP kinases, another signaling pathway that ultimately influences cytokine/chemokine secretion ( ).

Altogether, the above findings confirm and expand on previous data supporting the inflammatory role of Smad7 in the gut ( – ), and raise the possibility that Smad7 is a target for the treatment of patients with chronic pouchitis. However, the failure of oral Mongersen in CD patients suggests the need for a better definition of potential candidates for such a treatment, as well as the optimization of the pharmaceutical compounds containing the Smad7 AS, before future clinical trials ( ). We are aware that the relatively small sample size can represent a limitation of this study, although there was a noticeable difference between CP patients and controls in terms of Smad7. The fact that Smad7 is expressed in epithelial cells, in addition to LPMCs, raises the possibility that Smad7 plays a role in the control of epithelial cell behavior. However, we experienced technical difficulties in setting up experiments with epithelial cells isolated from CP and assessing the function of Smad7. Although the inflammatory role of Smad7 was validated by assessing the expression of two cytokines classically associated with gut inflammation in cultures of mucosal samples treated with Smad7 AS, further work is needed to fully evaluate the contribution of Smad7 to the propagation of CP-associated inflammation. In conclusion, this study shows that Smad7 is over-expressed in the inflamed mucosa of patients with CP, further supporting the pathogenic role of Smad7 in the gut.